squishai/squish
🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama drop-in for Apple Silicon — statistically identical accuracy, 54× faster cold starts.
Stars: 2
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/squishai/squish"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
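The endpoint above can also be called from Python. A minimal sketch, using only the standard library; the URL shape is taken from the curl example, but the `X-API-Key` header name is an assumption for the keyed tier — check the service's docs for the real mechanism:

```python
import json
import urllib.request
from typing import Optional

# Base path copied from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch the quality record for a repo; anonymous access needs no key."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Hypothetical header name -- the card does not specify how the
        # free key is supplied.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# e.g. fetch_quality("squishai", "squish")
```

The same pattern works for any repo in the directory by swapping the owner/repo pair.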
Higher-rated alternatives
- radlab-dev-group/llm-router: LLM Router is a service that can be deployed on-premises or in the cloud. It adds a layer...
- yonahgraphics/openevalkit: Production-grade Python framework for evaluating LLM and agentic systems with traditional...
- Aryan-202/cookbooks: An intelligent optimization engine that dynamically adjusts LLM selection, context size, and...
- wesleyscholl/squish: 🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama...
- Yu-amd/Multiverse: Lightweight model inference playground