deeflect/mcclaw

Find which local LLMs actually run on your Mac. 340+ models, hardware-aware recommendations.

Score: 21 / 100 (Experimental)

This tool helps Mac users discover which large language models (LLMs) will run smoothly on their specific hardware without crashing. You enter your Mac's chip and RAM, and it returns tailored recommendations for compatible LLMs along with performance estimates. It's designed for anyone on a Mac who wants to experiment with or use local LLMs, from beginners to experienced practitioners.

Use this if you own a Mac and want to run large language models locally but are unsure which ones your hardware can support or which quantization to choose.

Not ideal if you are looking for LLM recommendations for non-Mac hardware, or if you need to compare cloud-based LLM services.

Tags: macOS computing, local AI deployment, LLM selection, personal AI workflow, AI experimentation
No License · No Package · No Dependents

Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 3 / 25
Community: 0 / 25


Stars: 13
Forks: —
Language: —
License: —
Last pushed: Mar 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/deeflect/mcclaw"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
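The same endpoint can be called from a script. Below is a minimal Python sketch, assuming only the URL pattern shown in the `curl` example above; the JSON response schema is not documented here, so the helper simply decodes and returns whatever the API sends back (the `quality_url` and `fetch_quality` names are illustrative, not part of the API).

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (schema unspecified)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


# Example usage (performs a network request):
#   report = fetch_quality("deeflect", "mcclaw")
#   print(json.dumps(report, indent=2))
```

Unauthenticated calls count against the 100-request daily limit, so caching the response locally is sensible if you poll more than once.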