srikanth235/benchllama

Benchmark your local LLMs.

Score: 40 / 100 (Emerging)

This tool helps AI developers and researchers choose the best local large language models (LLMs) for a given task. It takes models served via Ollama and an optional evaluation dataset, then reports performance and quality metrics, including pass@k for coding models. It's aimed at individuals managing and optimizing local LLM deployments.
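
For context on the pass@k metric mentioned above: it estimates the probability that at least one of k sampled completions for a problem passes its tests. Below is a minimal Python sketch of the standard unbiased estimator from the HumanEval paper; it illustrates the metric itself and is not taken from benchllama's source.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimator from the HumanEval paper:
    # pass@k = 1 - C(n - c, k) / C(n, k), computed as a stable product.
    if n - c < k:
        return 1.0  # fewer than k failing samples: every k-subset contains a pass
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 completions per task, 3 passed -> pass@5 is about 0.917
print(pass_at_k(n=10, c=3, k=5))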

No commits in the last 6 months. Available on PyPI.
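
Installation is presumably the usual pip flow; the package name is assumed here to match the repo name:

pip install benchllama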

Use this if you need to compare and select the most effective local LLMs for your applications, especially when evaluating code generation capabilities across different programming languages.

Not ideal if you are working with cloud-based LLMs or models not served through Ollama, or if your primary focus is on fine-tuning models rather than comparative benchmarking.

Tags: LLM-benchmarking, model-evaluation, local-AI-development, code-generation, AI-model-selection
Status: Stale (no commits in 6 months)
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 7 / 25

The four sub-scores sum to the overall score of 40 / 100.

Stars: 53
Forks: 3
Language: Python
License: MIT
Last pushed: Aug 26, 2024
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/srikanth235/benchllama"

Open to everyone: 100 requests/day with no key required. Get a free key to raise the limit to 1,000/day.
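
For programmatic access, here is a minimal Python sketch of the same request shown in the curl example above; it only fetches and pretty-prints the payload, since the exact JSON schema is not documented here and field names should be checked against the actual response.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/srikanth235/benchllama"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Inspect the payload to confirm its structure before relying on specific fields.
print(json.dumps(data, indent=2))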