Zefan-Cai/Awesome-LLM-KV-Cache

Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes.

Quality score: 39 / 100 (Emerging)

This is a curated list of research papers and associated codebases focused on optimizing the Key-Value (KV) cache in large language models (LLMs). It helps AI researchers and practitioners stay up-to-date with the latest advancements in LLM inference efficiency. You get a categorized list of academic papers, often with links to their code, and insight into different strategies for managing KV caches.

417 stars. No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer actively working on improving the performance and efficiency of large language model inference and want to explore the latest techniques in KV cache management.

Not ideal if you are looking for an off-the-shelf tool or library to directly use in a non-research LLM application.

AI-research LLM-inference model-optimization deep-learning-efficiency natural-language-processing
Stale (6m) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25

Stars: 417
Forks: 26
Language:
License: GPL-3.0
Last pushed: Mar 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Zefan-Cai/Awesome-LLM-KV-Cache"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.