ollama-benchmark and llmBench
These two tools are direct competitors: both measure the performance and efficiency of LLM workloads, particularly models run locally through frameworks like Ollama, so a user would typically pick one or the other for a given benchmarking task.
About ollama-benchmark
cloudmercato/ollama-benchmark
Handy tool to measure the performance and efficiency of LLM workloads.
This tool helps AI engineers and researchers assess how well their Ollama-hosted large language models (LLMs) perform. Given a set of models and test parameters, it reports detailed performance metrics such as response speed, embedding generation time, and even the quality of answers. You can use it to compare different models or to tune a single model's setup for a specific task.
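To make the kind of measurement concrete, here is a minimal sketch of how generation speed can be measured against a local Ollama server, using the documented timing fields returned by Ollama's /api/generate endpoint (durations are reported in nanoseconds). This is not ollama-benchmark's actual code, and the model names in the usage example are placeholders for whatever models you have pulled locally.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def benchmark_generation(model: str, prompt: str) -> dict:
    """Run one non-streaming generation and derive throughput metrics
    from the timing fields Ollama returns (all durations in nanoseconds)."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    data = resp.json()
    # Guard against zero durations (e.g. a fully cached prompt).
    prompt_ns = max(data.get("prompt_eval_duration", 0), 1)
    eval_ns = max(data.get("eval_duration", 0), 1)
    return {
        "model": model,
        "load_s": data.get("load_duration", 0) / 1e9,
        "prompt_eval_tps": data.get("prompt_eval_count", 0) / (prompt_ns / 1e9),
        "eval_tps": data.get("eval_count", 0) / (eval_ns / 1e9),
        "total_s": data.get("total_duration", 0) / 1e9,
    }

if __name__ == "__main__":
    # Compare two locally pulled models on the same prompt (placeholder names).
    for model in ("llama3.2", "mistral"):
        print(benchmark_generation(model, "Explain the Doppler effect in one paragraph."))
```

Running the same prompt through several models and comparing the eval_tps (generated tokens per second) figures is the core loop a tool like this automates, along with averaging over repeated runs.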
About llmBench
AnkitNayak-eth/llmBench
llmBench is a high-depth benchmarking tool designed to measure the raw performance of local LLM runtimes (Ollama, llama.cpp) while providing deep hardware intelligence.
This tool helps you understand how well local AI models (such as those running on Ollama or llama.cpp) perform on your computer's hardware. It inspects your local AI setup and your machine's components, reports detailed metrics, and can compare your results against global AI model benchmarks. It is aimed at AI engineers, data scientists, and anyone setting up or managing local large language models.
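The "hardware intelligence" side boils down to recording the host details that explain a benchmark number. Below is a minimal sketch of that kind of introspection, not llmBench's actual implementation: it assumes the third-party psutil package is installed and, optionally, that nvidia-smi is on the PATH for NVIDIA GPU detection.

```python
import platform
import shutil
import subprocess

import psutil  # third-party: pip install psutil

def hardware_snapshot() -> dict:
    """Collect the basic host details a local-LLM benchmark would record
    alongside its timing numbers."""
    info = {
        "os": f"{platform.system()} {platform.release()}",
        "cpu": platform.processor() or platform.machine(),
        "physical_cores": psutil.cpu_count(logical=False),
        "logical_cores": psutil.cpu_count(logical=True),
        "ram_gb": round(psutil.virtual_memory().total / 2**30, 1),
    }
    # GPU detection via nvidia-smi, if present (NVIDIA-only; other vendors
    # would need their own query tools).
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=False,
        )
        info["gpus"] = [line.strip() for line in out.stdout.splitlines() if line.strip()]
    return info

if __name__ == "__main__":
    print(hardware_snapshot())
```

Attaching a snapshot like this to every benchmark run is what makes cross-machine comparisons meaningful: the same model can differ severalfold in tokens per second between, say, a laptop CPU and a desktop GPU.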