thepradip/turboquant-vllm

Efficient KV-cache quantization for LLM inference: 4-bit TurboQuant (PolarQuant + Hadamard rotation), KIVI asymmetric quantization, and Bonsai 1-bit Q1_0_g128. A rough sketch of the asymmetric group-quantization idea follows.
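
The description names several KV-cache quantization schemes. As a rough illustration of the asymmetric, grouped scheme that KIVI popularized (and that the g128 suffix in Q1_0_g128 hints at), here is a minimal NumPy sketch of asymmetric per-group uniform quantization. The function names, shapes, and 4-bit/128-group defaults are illustrative assumptions, not this repository's API; TurboQuant's Hadamard rotation step (multiplying by a Hadamard matrix to flatten outliers before quantizing) is omitted.

    import numpy as np

    def quantize_asym(x, n_bits=4, group_size=128):
        # Asymmetric uniform quantization: each group of `group_size`
        # contiguous values gets its own scale and zero point (the min).
        q_levels = 2 ** n_bits - 1
        groups = x.reshape(-1, group_size)
        g_min = groups.min(axis=1, keepdims=True)
        g_max = groups.max(axis=1, keepdims=True)
        scale = (g_max - g_min) / q_levels
        scale = np.where(scale == 0.0, 1.0, scale)  # guard constant groups
        q = np.clip(np.round((groups - g_min) / scale), 0, q_levels)
        return q.astype(np.uint8), scale, g_min

    def dequantize_asym(q, scale, g_min, shape):
        # Invert the mapping: x_hat = q * scale + min, then restore shape.
        groups = q.astype(np.float32) * scale + g_min
        return groups.reshape(shape)

    # Toy check on a fake key-cache slice (seq_len=64, head_dim=128).
    k = np.random.randn(64, 128).astype(np.float32)
    q, s, m = quantize_asym(k)
    k_hat = dequantize_asym(q, s, m, k.shape)
    print("max abs error:", np.abs(k - k_hat).max())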

Quality score: 14 / 100 (Experimental)
Flags: No License, No Package, No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25


Stars:
Forks:
Language: Python
License: None
Last pushed: Apr 04, 2026
Commits (30d): 0

Get this data via API:

    curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/thepradip/turboquant-vllm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
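
The same endpoint can be hit from Python, which is handy for scripting. A minimal sketch using `requests`; the page does not document the response schema or the header used for an API key, so this example only fetches and prints the returned JSON:

    import requests

    URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "llm-tools/thepradip/turboquant-vllm")

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx (e.g. rate-limit) errors
    print(resp.json())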