AnkitNayak-eth/llmBench

llmBench is a high-depth benchmarking tool designed to measure the raw performance of local LLM runtimes (Ollama, llama.cpp) while providing deep hardware intelligence.

Overall score: 38 / 100 (Emerging)

This tool helps you understand how well your local AI models (like those running on Ollama or llama.cpp) are performing on your computer's hardware. It takes information about your local AI setup and your computer's components to show you detailed metrics and even compare your performance against global AI model benchmarks. This is ideal for AI engineers, data scientists, or anyone setting up and managing local large language models.

Use this if you need to deeply analyze the performance of your local large language models and understand how your hardware impacts their speed and efficiency.

Not ideal if you are only interested in using cloud-based AI models or don't need detailed hardware and performance insights for local setups.

Tags: AI engineering, LLM deployment, hardware optimization, performance benchmarking, local AI development
Package: none · Dependents: none
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 9 / 25
Community: 10 / 25
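The four category scores (each out of 25) appear to add up to the overall score; a quick sanity check of that assumption:

```python
# Assumption (not documented on the page): overall = sum of category scores.
scores = {"Maintenance": 13, "Adoption": 6, "Maturity": 9, "Community": 10}
total = sum(scores.values())
print(total)  # 38, matching the 38/100 overall score shown above
```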


Stars: 24
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 15, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AnkitNayak-eth/llmBench"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
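For scripted access, the same endpoint can be called from Python. This is a minimal sketch; the endpoint URL comes from the curl example above, but the JSON field names in the response are not documented here, so the fetch helper returns the raw parsed object rather than assuming a schema:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality report as parsed JSON (schema not assumed)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("AnkitNayak-eth", "llmBench"))
```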