AnkitNayak-eth/llmBench
llmBench is a high-depth benchmarking tool designed to measure the raw performance of local LLM runtimes (Ollama, llama.cpp) while providing deep hardware intelligence.
In practical terms, it shows how well your local models perform on your machine: it gathers information about your runtime and your system's components, reports detailed performance metrics, and can compare your results against global model benchmarks. It is aimed at AI engineers, data scientists, and anyone setting up or managing local large language models.
Use it when you need to analyze local LLM performance in depth and understand how your hardware affects speed and efficiency.
Not ideal if you only use cloud-based AI models or don't need detailed hardware and performance insights for local setups.
Stars: 24
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Mar 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AnkitNayak-eth/llmBench"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
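A minimal Python sketch of the same request, assuming the endpoint returns JSON (the response's field names are not documented here, so the result is simply pretty-printed):

```python
import json
import urllib.request

# Endpoint from the listing above; no API key is needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/AnkitNayak-eth/llmBench"

with urllib.request.urlopen(URL, timeout=10) as resp:
    # Assumption: the API responds with a JSON body.
    data = json.load(resp)

# Field names are not specified in the listing, so print whatever comes back.
print(json.dumps(data, indent=2))
```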
Higher-rated alternatives
stanfordnlp/axbench
Stanford NLP Python library for benchmarking the utility of LLM interpretability methods
aidatatools/ollama-benchmark
LLM Benchmark for Throughput via Ollama (Local LLMs)
LarHope/ollama-benchmark
Ollama-based benchmark with detailed I/O tokens-per-second reporting; Python, with a DeepSeek R1 example.
qcri/LLMeBench
Benchmarking Large Language Models
THUDM/LongBench
LongBench v2 and LongBench (ACL '25 & '24)