rohanelukurthy/rig-rank

A Go CLI tool to benchmark local LLMs via Ollama, measuring Time To First Token (TTFT) and throughput on your specific hardware.

Quality score: 36 / 100 (Emerging)

This tool helps you understand how well Large Language Models (LLMs) run on your personal computer. It takes a locally hosted LLM (like Llama 3) and measures how quickly it starts generating text (Time To First Token) and how fast it produces words (throughput). It is for anyone setting up local AI models who wants to know whether their computer can handle the workload and which models will perform best on it.

Use this if you are running LLMs locally via Ollama and need to benchmark their speed and responsiveness on your specific hardware.

Not ideal if you need to evaluate the accuracy, intelligence, or factual correctness of an LLM's responses, as it focuses purely on speed metrics.

Tags: local-AI-deployment, LLM-performance, hardware-suitability, desktop-AI, AI-experimentation
No package · No dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 11 / 25
Community 9 / 25


Stars: 18
Forks: 2
Language: Go
License: MIT
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rohanelukurthy/rig-rank"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.