Pavelevich/llm-checker

Advanced CLI tool that scans your hardware and tells you exactly which LLM or sLLM models you can run locally, with full Ollama integration.

Quality score: 71 / 100 · Verified

This tool helps developers and AI enthusiasts quickly decide which large language models (LLMs) or small language models (sLLMs) they can run directly on their own computer. It analyzes your hardware, identifies which models are compatible and likely to perform best, and returns a list of recommendations, helping you get the most out of your local machine for AI tasks, especially when using Ollama.

1,642 stars. Actively maintained with 18 commits in the last 30 days. Available on npm.

Use this if you want to quickly identify the best-performing LLMs for your specific computer hardware, avoiding trial-and-error.

Not ideal if you are looking for an LLM selection tool for cloud deployments or if you are not interested in running models locally via Ollama.

Tags: local-AI, LLM-deployment, hardware-optimization, AI-model-selection, developer-tools
Maintenance 20 / 25
Adoption 10 / 25
Maturity 24 / 25
Community 17 / 25

How are scores calculated? The overall score is the sum of the four category scores above (each out of 25): 20 + 10 + 24 + 17 = 71 / 100.

Stars: 1,642
Forks: 105
Language: JavaScript
License: (not listed)
Last pushed: Mar 14, 2026
Commits (30d): 18
Dependencies: 10

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Pavelevich/llm-checker"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
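A minimal sketch of the same request from Node.js (version 18 or newer, which ships a global fetch). The endpoint URL is taken from the curl command above; the shape of the JSON response and any field names are assumptions, so inspect the actual payload before relying on them.

// Fetch quality data for Pavelevich/llm-checker from the public API.
// Unauthenticated calls are limited to 100 requests/day.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Pavelevich/llm-checker";

async function getQualityData() {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  // The payload is logged as-is; specific fields (score, stars, etc.) are not
  // documented here, so check the response before depending on them.
  const data = await res.json();
  console.log(data);
  return data;
}

getQualityData().catch(console.error);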