ModelEngine-Group/unified-cache-management

Persist and reuse KV Cache to speed up your LLM.

Score: 58 / 100 (Established)

This tool helps AI engineers and machine learning practitioners improve the performance and efficiency of their Large Language Models (LLMs), especially for workloads involving long text sequences or multi-turn dialogues. By persisting and intelligently reusing the KV Cache across requests, it significantly reduces inference latency and allows more flexible resource usage. It is designed for professionals deploying and managing LLM-powered applications.
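To make the caching idea concrete, the sketch below shows prefix-keyed KV Cache reuse across dialogue turns. It is a minimal illustration, not this project's API: the kv_store dict, the prefix_key hashing scheme, and the model.prefill/model.decode calls are all hypothetical stand-ins.

import hashlib

# Hypothetical in-memory store mapping a prompt prefix to its KV Cache.
# Real systems persist the cache to host memory, SSD, or remote storage;
# this dict stands in for that tier.
kv_store: dict[str, object] = {}

def prefix_key(prompt_prefix: str) -> str:
    """Key a cache entry by a hash of the shared prompt prefix."""
    return hashlib.sha256(prompt_prefix.encode("utf-8")).hexdigest()

def generate_with_reuse(model, prompt_prefix: str, new_turn: str):
    """Reuse the prefix's KV Cache if present, else compute and persist it."""
    key = prefix_key(prompt_prefix)
    kv_cache = kv_store.get(key)
    if kv_cache is None:
        # Cache miss: run prefill over the whole prefix once.
        kv_cache = model.prefill(prompt_prefix)  # hypothetical model call
        kv_store[key] = kv_cache
    # Cache hit on later turns: only the new tokens need processing.
    return model.decode(kv_cache, new_turn)  # hypothetical model call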


Use this if you are running LLMs and frequently encounter high latency or memory constraints, particularly with long context windows or many interactive turns.

Not ideal if you are working with smaller models or short, single-turn prompts where cache management is not a primary bottleneck.

Tags: LLM deployment, AI inference optimization, NLP engineering, resource management, conversational AI
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 23 / 25
(The four components sum to the overall score of 58 / 100.)


Stars: 261
Forks: 66
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ModelEngine-Group/unified-cache-management"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
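If you prefer to call the endpoint from Python, here is a minimal sketch using only the standard library. The URL comes from the curl command above; the response fields are not documented here, so the sketch just prints the raw JSON.

import json
import urllib.request

# Same endpoint as the curl command above; no API key is required
# for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "ModelEngine-Group/unified-cache-management")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Print the raw JSON; the exact schema is an assumption left unparsed.
print(json.dumps(data, indent=2))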