AlexsJones/llmfit

Hundreds of models & providers. One command to find what runs on your hardware.

Quality score: 69 / 100 (Established)

This tool helps you quickly figure out which large language models (LLMs) will run effectively on your specific computer hardware. You input your hardware specifications (RAM, CPU, GPU), and it provides a ranked list of models, showing their estimated performance, memory usage, and how well they fit your system. This is for anyone looking to run LLMs locally on their own machine, from researchers experimenting with models to developers building local AI applications.
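llmfit's actual ranking heuristics are not documented here, but the core idea of a hardware-fit check can be sketched roughly. The formula and constants below (bytes per weight by quantization level, a flat 20% overhead factor) are illustrative assumptions, not llmfit's implementation:

```python
# Rough sketch of the kind of fit check a tool like llmfit performs.
# All constants here are illustrative assumptions, not llmfit's heuristics.

BYTES_PER_WEIGHT = {"f16": 2.0, "q8": 1.0, "q4": 0.5}  # common quantizations

def estimated_ram_gb(params_b: float, quant: str, overhead: float = 1.2) -> float:
    """Estimate RAM (GB) to load a model of `params_b` billion parameters."""
    return params_b * BYTES_PER_WEIGHT[quant] * overhead

def fits(params_b: float, quant: str, available_gb: float) -> bool:
    """Does the model's estimated footprint fit in available memory?"""
    return estimated_ram_gb(params_b, quant) <= available_gb

# Example: a 7B model at 4-bit quantization on a 16 GB machine.
print(fits(7, "q4", 16))    # 7 * 0.5 * 1.2 = 4.2 GB  -> True
print(fits(70, "f16", 16))  # 70 * 2.0 * 1.2 = 168 GB -> False
```

A real tool would also account for KV-cache size, context length, and GPU vs. CPU memory split, which is where the per-model performance estimates come from.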

15,685 stars and 4,266 monthly downloads. Actively maintained with 157 commits in the last 30 days.

Use this if you want to find the best-performing LLMs for your computer's exact specifications without guessing or trial-and-error.

Not ideal if you plan to use cloud-based LLM services or if you are not concerned with optimizing model performance on your local hardware.

Tags: AI-experimentation, local-AI-deployment, model-selection, hardware-optimization, machine-learning-engineering
No package · No dependents
Maintenance 22 / 25
Adoption 18 / 25
Maturity 11 / 25
Community 18 / 25


Stars: 15,685
Forks: 875
Language: Rust
License: MIT
Last pushed: Mar 13, 2026
Monthly downloads: 4,266
Commits (30d): 157

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AlexsJones/llmfit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.