jandhyala-dev/modelai-llama.cpp
Production fork of llama.cpp adding KV cache compaction via Attention Matching
Overall score: 25 / 100 (Experimental)
No Package · No Dependents
Maintenance: 13 / 25
Adoption: 1 / 25
Maturity: 11 / 25
Community: 0 / 25
Stars: 1
Forks: —
Language: C++
License: MIT
Category:
Last pushed: Mar 27, 2026
Commits (30d): 0
Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jandhyala-dev/modelai-llama.cpp"

The endpoint is open to everyone at 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
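For programmatic use, here is a minimal Python sketch that fetches and pretty-prints the report. The URL comes from the curl example above; the response schema is not documented on this page, so the sketch makes no assumptions about field names and simply dumps the parsed JSON.

import json
import urllib.request

# Endpoint taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/jandhyala-dev/modelai-llama.cpp")

def fetch_quality_report(url: str = URL) -> dict:
    """Fetch the quality report and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality_report()
    # The schema is undocumented here, so just pretty-print the payload.
    print(json.dumps(report, indent=2))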
Higher-rated alternatives
ModelEngine-Group/unified-cache-management (score: 58)
Persist and reuse KV Cache to speed up your LLM.

reloadware/reloadium (score: 48)
Hot Reloading and Profiling for Python.

October2001/Awesome-KV-Cache-Compression (score: 47)
📰 Must-read papers on KV Cache Compression (constantly updating 🤗).

alibaba/tair-kvcache (score: 47)
Alibaba Cloud's high-performance KVCache system for LLM inference, with components for global...

Zefan-Cai/Awesome-LLM-KV-Cache (score: 39)
A curated list of 📙 Awesome LLM KV Cache papers with code.