aidatatools/ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)

Quality score: 53 / 100 (Established)

This tool helps you quickly understand the real performance of your local Large Language Models (LLMs) running via Ollama. It runs inference against models already installed in your Ollama instance and reports throughput as a clear tokens-per-second metric. AI/ML practitioners, researchers, and anyone experimenting with local LLMs can use it to compare models and hardware configurations.


Use this if you need to measure the raw inference speed (throughput) of various LLMs on your local machine to compare performance or optimize your setup.

Not ideal if you are looking to benchmark the accuracy, quality, or specific application performance of an LLM, as this tool focuses solely on throughput.
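Throughput here means generated tokens divided by generation time. Ollama's generate endpoint reports the two numbers needed for this in its final response: eval_count (tokens produced) and eval_duration (time spent, in nanoseconds). As a minimal sketch of the metric this benchmark reports (the field names come from Ollama's API; the sample numbers are made up):

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Throughput in tokens/sec from Ollama's timing fields.

    eval_count comes from the final /api/generate response;
    eval_duration is reported in nanoseconds, hence the 1e9 factor.
    """
    return eval_count / eval_duration_ns * 1e9

# Made-up example: 256 tokens generated over 4.0 seconds.
print(tokens_per_second(256, 4_000_000_000))  # -> 64.0
```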

local-LLMs machine-learning-operations AI-performance-tuning model-evaluation LLM-deployment
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 17 / 25


Stars: 345
Forks: 41
Language: Python
License: MIT
Last pushed: Jan 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aidatatools/ollama-benchmark"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
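The same request can be made from Python. This is a minimal sketch using only the standard library: the URL is copied verbatim from the curl example above, and since the response schema is not documented here, the helper simply returns whatever JSON the endpoint sends back.

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/aidatatools/ollama-benchmark"

def fetch_quality(url: str = API_URL) -> dict:
    """GET the quality endpoint and parse the JSON body."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

# Usage (performs a network request):
#   print(json.dumps(fetch_quality(), indent=2))
```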