ollama-benchmark and llmBench
These tools are competitors: both provide throughput and raw-performance benchmarking for local LLM runtimes such as Ollama and llama.cpp.
About ollama-benchmark
aidatatools/ollama-benchmark
LLM Benchmark for Throughput via Ollama (Local LLMs)
This tool helps you quickly understand the real performance of your local Large Language Models (LLMs) running via Ollama. It runs against your existing local LLM setup and reports a clear tokens-per-second metric. AI/ML practitioners, researchers, and anyone experimenting with local LLMs can use it to compare models and hardware configurations.
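As a rough illustration of what a tokens-per-second measurement looks like, here is a minimal sketch against Ollama's local HTTP API. It assumes an Ollama server at localhost:11434 and a pulled model named llama3 (both are assumptions for the example); it is not ollama-benchmark's own code.

```python
# Minimal sketch: measure decode throughput against a local Ollama server.
# Assumes Ollama is running on localhost:11434 and the model below is pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # assumed model name; substitute any model you have pulled

def tokens_per_second(prompt: str) -> float:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    data = resp.json()
    # Ollama reports the generated token count and generation time (nanoseconds).
    eval_count = data["eval_count"]
    eval_duration_s = data["eval_duration"] / 1e9
    return eval_count / eval_duration_s

if __name__ == "__main__":
    tps = tokens_per_second("Explain what a token is in one paragraph.")
    print(f"Decode throughput: {tps:.1f} tokens/s")
```

Running this a few times with different prompts and models gives the kind of per-model, per-hardware comparison the tool is built around.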
About llmBench
AnkitNayak-eth/llmBench
llmBench is a high-depth benchmarking tool designed to measure the raw performance of local LLM runtimes (Ollama, llama.cpp) while providing deep hardware intelligence.
This tool helps you understand how well your local AI models (such as those running on Ollama or llama.cpp) perform on your machine's hardware. It combines information about your local AI setup and your computer's components to show detailed metrics, and can compare your results against global AI model benchmarks. It is aimed at AI engineers, data scientists, and anyone setting up and managing local large language models.
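To make the "hardware intelligence" side concrete, here is a minimal sketch that collects basic system specs to report alongside throughput numbers. It assumes the psutil package is installed and is only an illustration of the idea, not llmBench's actual implementation.

```python
# Minimal sketch: gather basic hardware details to attach to benchmark results.
# Assumes psutil is installed (pip install psutil); illustrative only.
import platform
import psutil

def hardware_report() -> dict:
    vm = psutil.virtual_memory()
    return {
        "os": f"{platform.system()} {platform.release()}",
        # platform.processor() can be empty on Linux; fall back to the architecture.
        "cpu": platform.processor() or platform.machine(),
        "physical_cores": psutil.cpu_count(logical=False),
        "logical_cores": psutil.cpu_count(logical=True),
        "ram_gb": round(vm.total / 2**30, 1),
    }

if __name__ == "__main__":
    for key, value in hardware_report().items():
        print(f"{key:>15}: {value}")
```

Pairing a report like this with per-model throughput numbers is what makes results comparable across different machines.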