October2001/Awesome-KV-Cache-Compression

📰 Must-read papers on KV Cache Compression (constantly updating 🤗).

Overall score: 47 / 100 (Emerging)

This resource provides a curated collection of research papers and projects focused on optimizing the memory usage of Large Language Models (LLMs). It gathers techniques for managing the KV cache, the store of per-token key and value tensors that attention layers reuse during autoregressive generation and that often dominates inference memory at long context lengths. This helps AI researchers and practitioners identify and implement methods to reduce the computational demands and costs of deploying and operating LLMs.
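For orientation, the memory pressure these papers target scales directly with model shape: the cache holds one key and one value vector per layer, per KV head, per cached token. A minimal back-of-the-envelope sketch in Python (the model dimensions below are illustrative assumptions, roughly 7B-class, not figures taken from the list; the sliding-window variant stands in for the eviction-style strategies many of the papers study):

# Back-of-the-envelope KV cache size for a decoder-only transformer.
# Config numbers are illustrative assumptions, not from any paper in this list.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, batch_size, bytes_per_elem=2):
    # Keys + values (factor of 2), across all layers, heads, and cached tokens (fp16 by default).
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

full = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                      seq_len=32_768, batch_size=1)

# A simple baseline "compression": keep only the most recent tokens
# (sliding-window eviction), one of the strategy families the papers cover.
window = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                        seq_len=4_096, batch_size=1)

print(f"full cache:   {full / 2**30:.1f} GiB")
print(f"window cache: {window / 2**30:.1f} GiB ({window / full:.1%} of full)")

Lowering bytes_per_elem gives a similarly quick estimate for quantization-based approaches, which shrink each stored element rather than dropping tokens.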


Use this if you are a researcher, engineer, or practitioner working with Large Language Models and want to understand or implement methods to reduce their memory footprint and improve inference efficiency.

Not ideal if you are looking for a plug-and-play software solution, or if you want a general introduction to LLMs and lack a technical background in their architecture and optimization.

Tags: Large Language Models · LLM Optimization · AI Inference · Natural Language Processing · Deep Learning · Efficiency
No package · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 668
Forks: 22
Language:
License: MIT
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/October2001/Awesome-KV-Cache-Compression"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
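The same endpoint can also be queried from a script. A minimal sketch using Python's requests library (only the URL above is given on this page; the JSON field names are not documented here, so the example just inspects whatever keys the response contains):

# Fetch the same quality data from Python instead of curl.
# The response schema is an assumption-free unknown: we only print its keys.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "October2001/Awesome-KV-Cache-Compression")

resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(sorted(data))  # list the available fields before relying on any of them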