Pavelevich/llm-checker
Advanced CLI tool that scans your hardware and tells you exactly which LLM or sLLM models you can run locally, with full Ollama integration.
This tool helps developers and AI enthusiasts choose which large language models (LLMs) or small language models (sLLMs) they can run directly on their own machine. It analyzes your hardware, determines which models are compatible and which perform best, and produces a list of recommendations, so you get the most out of your local machine for AI tasks, especially when using Ollama.
1,642 stars. Actively maintained with 18 commits in the last 30 days. Available on npm.
Use this if you want to quickly identify the best-performing LLMs for your specific computer hardware, avoiding trial-and-error.
Not ideal if you are looking for an LLM selection tool for cloud deployments or if you are not interested in running models locally via Ollama.
Stars: 1,642
Forks: 105
Language: JavaScript
License: —
Category: —
Last pushed: Mar 14, 2026
Commits (30d): 18
Dependencies: 10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Pavelevich/llm-checker"
The API is open to everyone: 100 requests/day with no key required, or get a free key for 1,000 requests/day.
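The endpoint above follows a simple owner/repo pattern, so the URL for any listed tool can be built programmatically. A minimal Python sketch (the helper name and the URL-encoding choice are my own; nothing is assumed about the response format or authentication beyond what the curl example shows):

```python
# Sketch: construct the quality-endpoint URL from the curl example above
# for any GitHub owner/repo pair. Query parameters and API-key headers
# are not documented here, so none are added.
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the API URL for a given GitHub owner/repo pair."""
    # quote() guards against owner/repo names containing characters
    # that are not URL-safe.
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("Pavelevich", "llm-checker"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/Pavelevich/llm-checker
```

The same helper could then be passed to any HTTP client to fetch the data shown on this page.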
Related tools
aircodelabs/llms-txt-generator
The ultimate AI-powered generator for llms.txt and llms-full.txt files.
mrkhachaturov/mkdocs-ask-ai
MkDocs plugin for AI-ready documentation — Use with AI dropdown, llms.txt, MCP server, i18n support
OppieAI/ToolsFilter
Fetch only relevant tools for the current conversation and save cost while increasing the...
adistrim/gollama
chat interface for extreme vibe coding