deeflect/mcclaw
Find which local LLMs actually run on your Mac. 340+ models, hardware-aware recommendations.
This tool helps Mac users discover which large language models (LLMs) will run smoothly on their specific hardware without crashing. You enter your Mac's chip and RAM, and it returns tailored recommendations for compatible LLMs along with performance estimates (a rough sketch of the memory math behind this kind of check follows below). It's aimed at anyone on a Mac who wants to experiment with or use local LLMs, from beginners to experienced practitioners.
Use this if you own a Mac and want to run large language models locally but are unsure which ones your hardware can support or which quantization to choose.
Not ideal if you are looking for LLM recommendations for non-Mac hardware, or if you need to compare cloud-based LLM services.
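To make the fit check concrete, here is a rough back-of-envelope version of the memory math involved, in Python. This is not mcclaw's actual algorithm, which is not documented here; the function name, overhead factor, and usable-memory fraction are all illustrative assumptions. The core idea: a quantized model's weights take roughly parameters x bits-per-weight / 8 bytes, plus overhead for the KV cache and runtime, and macOS only lets inference use part of unified memory.

    # Rough sketch: will a quantized LLM plausibly fit in a Mac's unified memory?
    # Not mcclaw's actual logic; the constants below are illustrative assumptions.

    def fits_in_memory(params_b: float, bits_per_weight: float, ram_gb: float,
                       overhead: float = 1.2, usable_fraction: float = 0.75) -> bool:
        """params_b: parameter count in billions (7 for a 7B model);
        bits_per_weight: quantization width (4 for Q4, 16 for fp16);
        overhead: fudge factor for KV cache and runtime buffers;
        usable_fraction: share of RAM the OS leaves for inference."""
        weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits is ~1 GB
        return weights_gb * overhead <= ram_gb * usable_fraction

    # Example: 7B at 4-bit on a 16 GB Mac fits (~4.2 GB incl. overhead vs ~12 GB usable);
    # 70B at 4-bit does not (~42 GB needed).
    print(fits_in_memory(7, 4, 16))   # True
    print(fits_in_memory(70, 4, 16))  # False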
Stars: 13
Forks: —
Language: —
License: —
Category: —
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/deeflect/mcclaw"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
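A minimal sketch of calling the same endpoint from Python's standard library, so nothing needs installing. The response schema isn't documented here, so this just pretty-prints whatever JSON comes back; how an API key would be supplied (header or query parameter) is not shown because it isn't specified above.

    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/deeflect/mcclaw"

    # Keyless tier: 100 requests/day, per the note above.
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))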
Higher-rated alternatives
jundot/omlx
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the...
josStorer/RWKV-Runner
A RWKV management and startup tool, full automation, only 8MB. And provides an interface...
jordanhubbard/nanolang
A tiny experimental language designed to be targeted by coding LLMs
waybarrios/vllm-mlx
OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models...
akivasolutions/tightwad
Pool your CUDA + ROCm GPUs into one OpenAI-compatible API. Speculative decoding proxy gives you...