helgklaizar/turboquant_mlx

Extreme KV Cache Compression (1-3 bit) for LLMs natively on Apple Silicon (MLX). Features TurboQuant, asymmetric PolarQuant caching, and OpenAI server compatibility.

Score: 25 / 100 (Experimental)

This project helps developers running large language models (LLMs) on Apple Silicon Macs. It takes an existing LLM, such as Llama 3 or Gemma 2, and sharply reduces its memory footprint during inference by compressing the key-value (KV) cache to 1-3 bits per value. The model itself is unchanged but consumes far less unified memory, allowing longer generations or larger models on M-series chips. It is aimed at machine learning engineers, AI researchers, and developers building AI applications on Apple hardware.
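
At 2 bits per value, a KV cache stored in 16-bit floats shrinks by roughly 6-8x, depending on metadata overhead. The sketch below is not the repo's TurboQuant or PolarQuant code; it is a generic asymmetric uniform quantizer in plain NumPy, included only to illustrate the memory trade-off of low-bit KV-cache quantization.

import numpy as np

def quantize_asymmetric(x, bits=2, group_size=64):
    # One scale and one zero point per group of `group_size` values.
    levels = (1 << bits) - 1
    g = x.reshape(-1, group_size)
    lo = g.min(axis=1, keepdims=True)
    scale = np.maximum(g.max(axis=1, keepdims=True) - lo, 1e-8) / levels
    q = np.clip(np.round((g - lo) / scale), 0, levels).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q * scale + lo

# Toy KV cache: 8 heads x 4096 tokens x head_dim 128, stored in fp16.
kv = np.random.randn(8, 4096, 128).astype(np.float16)
q, scale, lo = quantize_asymmetric(kv.astype(np.float32), bits=2)

fp16_bytes = kv.size * 2
# 2 bits/value packed (4 values per byte) plus one fp16 scale and one
# fp16 zero point per 64-value group; `q` above is left unpacked for clarity.
packed_bytes = kv.size // 4 + (kv.size // 64) * 2 * 2
print(f"fp16: {fp16_bytes / 2**20:.1f} MiB -> 2-bit: {packed_bytes / 2**20:.2f} MiB")

recon = dequantize(q, scale, lo)
print("max abs error:", np.abs(recon - kv.astype(np.float32).reshape(-1, 64)).max())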

Use this if you are running large language models on an Apple Silicon Mac and frequently encounter out-of-memory errors or want to process much longer text sequences efficiently.

Not ideal if you are not using Apple Silicon, or if your LLM application does not involve extensive text generation and memory is not a bottleneck.

LLM-development · Apple-Silicon-optimization · AI-application-development · Machine-Learning-engineering
No License · No Package · No Dependents
Maintenance 13 / 25
Adoption 6 / 25
Maturity 1 / 25
Community 5 / 25

Stars: 15
Forks: 1
Language: Python
License: none
Last pushed: Mar 25, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/helgklaizar/turboquant_mlx"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
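
For programmatic access, here is a minimal Python equivalent of the curl call above, using only the standard library. It assumes the endpoint returns a JSON body; the response schema is not documented on this page.

import json
import urllib.request

# Same endpoint as the curl example above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/helgklaizar/turboquant_mlx")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumes a JSON response
print(json.dumps(data, indent=2))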