Awesome-LLM-KV-Cache and Awesome-KV-Cache-Management
These are **competitors**: both projects curate research papers and code links on KV cache optimization in LLMs, so they cover largely the same ground and most users will adopt one as their primary reference.
About Awesome-LLM-KV-Cache
Zefan-Cai/Awesome-LLM-KV-Cache
Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes.
This is a curated list of research papers and associated codebases focused on optimizing the Key-Value (KV) cache in large language models (LLMs). It helps AI researchers and practitioners stay up to date with advances in LLM inference efficiency. You get a categorized list of academic papers, often with links to their code, and an overview of the different strategies for managing KV caches.
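For readers new to the topic: a KV cache stores the attention keys and values computed for earlier tokens, so each newly generated token attends over cached state instead of recomputing the whole prefix. The sketch below is a minimal single-head illustration of that mechanism (toy dimensions and random vectors standing in for learned projections; it is not code from this repository):

```python
import numpy as np

def attend(q, keys, values):
    """Single-head attention of one query over all cached keys/values."""
    scores = keys @ q / np.sqrt(q.shape[-1])   # (seq_len,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over cached positions
    return weights @ values                    # (d,)

d = 8                        # toy head dimension (assumption)
k_cache, v_cache = [], []    # grows by one entry per generated token

for step in range(5):        # stand-in for an autoregressive decoding loop
    x = np.random.randn(d)   # stand-in for the current token's hidden state
    q, k, v = x, x, x        # real models apply learned Q/K/V projections here
    k_cache.append(k)        # cache this token's key/value once...
    v_cache.append(v)
    out = attend(q, np.stack(k_cache), np.stack(v_cache))  # ...and reuse all of them

# The cache is O(layers * heads * seq_len * d) memory, which is the cost the
# papers in these lists try to shrink (quantization, eviction, merging, ...).
```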
About Awesome-KV-Cache-Management
TreeAI-Lab/Awesome-KV-Cache-Management
This repository serves as a comprehensive survey of KV cache management for LLMs, collecting numerous research papers along with their corresponding code links.
This project is for developers who work with Large Language Models (LLMs) and need to improve their performance, particularly memory usage and decoding speed. It collects and categorizes research papers on KV cache management: techniques for storing, compressing, and evicting the attention key-value pairs an LLM reuses during autoregressive generation. The output is a curated list of research papers and their code, helping developers find methods to make their LLMs run faster and more efficiently.
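As a concrete example of what "management" means here: one widely studied family of methods bounds cache memory by evicting entries, for instance keeping a few initial "attention sink" tokens plus a recent window, as in StreamingLLM. Below is a hypothetical sketch of such a policy (the function name and parameters are illustrative, not taken from any listed paper's code):

```python
def evict_sink_plus_window(cache, num_sink=4, window=512):
    """Bound a per-layer KV cache to num_sink + window entries.

    `cache` is a list of per-token (key, value) pairs. Keeping the first
    few "sink" tokens plus the most recent window mirrors the policy
    popularized by StreamingLLM, one of many strategies these lists cover.
    """
    if len(cache) <= num_sink + window:
        return cache
    return cache[:num_sink] + cache[-window:]

# Usage: after appending each generated token's (k, v), re-bound the cache.
cache = [(f"k{i}", f"v{i}") for i in range(1000)]   # toy stand-in entries
cache = evict_sink_plus_window(cache)
print(len(cache))  # 516: memory stays constant as generation continues
```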