Janghyun1230/FastKVzip
Accurate and fast KV cache compression with a gating mechanism
This project helps large language model (LLM) operators run their models faster and more memory-efficiently, especially during the reasoning ("thinking") and response generation phases. By compressing the model's key-value (KV) cache with a gating mechanism that scores token importance, it lets the model process long contexts with significantly less memory while maintaining high accuracy. The primary users are MLOps engineers, model deployers, and researchers who manage and optimize LLM inference.
Use this if you deploy or run large language models on NVIDIA GPUs and need to reduce memory usage and increase inference speed without sacrificing accuracy.
Not ideal if you work with smaller language models, run in CPU-only environments, or do not need fine-grained control over individual token importance.
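The core idea described above can be illustrated with a minimal sketch. This is not the repository's actual implementation; it is a generic, NumPy-based illustration of score-based KV cache eviction, where a hypothetical per-token gate score decides which cached entries to keep. The function name `compress_kv_cache`, the `keep_ratio` parameter, and the gate values are all assumptions for illustration.

```python
import numpy as np

def compress_kv_cache(keys, values, gates, keep_ratio=0.5):
    """Score-based KV cache eviction (illustrative, not FastKVzip's method).

    keys, values: (seq_len, head_dim) cached tensors for one attention head
    gates:        (seq_len,) importance scores, higher = more important
    keep_ratio:   fraction of cached tokens to retain
    """
    seq_len = keys.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    # Indices of the k highest-gated tokens, restored to original order
    # so that positional structure is preserved in the compressed cache.
    keep = np.sort(np.argsort(gates)[-k:])
    return keys[keep], values[keep]

# Toy example: a cache of 8 tokens with 4-dim heads.
rng = np.random.default_rng(0)
keys = rng.standard_normal((8, 4))
values = rng.standard_normal((8, 4))
gates = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])

ck, cv = compress_kv_cache(keys, values, gates, keep_ratio=0.5)
print(ck.shape)  # (4, 4): half the tokens evicted, head_dim unchanged
```

With `keep_ratio=0.5` the cache shrinks by half, halving KV memory for that head at the cost of discarding the lowest-scored tokens; a real system would apply this per layer and per head during decoding.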
Stars
13
Forks
—
Language
Python
License
—
Category
Last pushed
Feb 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Janghyun1230/FastKVzip"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ModelEngine-Group/unified-cache-management
Persist and reuse KV Cache to speedup your LLM.
reloadware/reloadium
Hot Reloading and Profiling for Python
October2001/Awesome-KV-Cache-Compression
📰 Must-read papers on KV Cache Compression (constantly updating 🤗).
alibaba/tair-kvcache
Alibaba Cloud's high-performance KVCache system for LLM inference, with components for global...
Zefan-Cai/Awesome-LLM-KV-Cache
Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes.