dataflowr/llm_efficiency
KV Cache & LoRA for minGPT
This project provides minGPT-based implementations of two efficiency techniques for Large Language Models (LLMs): KV Caching, which speeds up autoregressive text generation by reusing attention keys and values computed at earlier decoding steps, and LoRA (Low-Rank Adaptation), which lowers the cost of adapting a pre-trained model to new tasks by training only small low-rank weight updates. If you build or customize LLMs, these techniques help optimize performance and resource use.
Use this if you are a machine learning engineer or researcher looking to improve inference speed or reduce fine-tuning costs for transformer-based language models.
Not ideal if you are an end-user of an LLM or are looking for a high-level API to interact with pre-trained models without needing to modify their internal workings.
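The repo's own code is not shown on this page, so here is a minimal NumPy sketch of the KV-caching idea it implements: at each decoding step, the new key and value vectors are appended to a cache and reused, instead of recomputing keys and values for the whole prefix. All names below (`attend`, `K_cache`, etc.) are illustrative, not the repo's API, and the repo itself is PyTorch-based.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(q, K, V):
    # single-head scaled dot-product attention for one query vector
    return softmax(K @ q / np.sqrt(q.shape[0])) @ V

rng = np.random.default_rng(0)
d, steps = 4, 6
# pretend these are the per-step query/key/value projections of new tokens
qs, ks, vs = (rng.standard_normal((steps, d)) for _ in range(3))

# with a KV cache: each step appends one key/value and attends over the cache
K_cache, V_cache, cached_out = [], [], []
for t in range(steps):
    K_cache.append(ks[t])
    V_cache.append(vs[t])
    cached_out.append(attend(qs[t], np.array(K_cache), np.array(V_cache)))

# without a cache: every step rebuilds K and V for the entire prefix
full_out = [attend(qs[t], ks[:t + 1], vs[:t + 1]) for t in range(steps)]
```

The cached and uncached paths produce identical outputs; the cache only removes the redundant recomputation, which is where the generation speedup comes from.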
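Likewise, a minimal sketch of the LoRA idea: the pretrained weight stays frozen, and only two small low-rank factors are trained, with the up-projection initialized to zero so fine-tuning starts exactly from the base model. Variable names and the NumPy formulation are assumptions for illustration, not taken from the repo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4            # hidden size, LoRA rank, scaling factor

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    # frozen base path plus the scaled low-rank update (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d)
```

Because `B` starts at zero, `lora_forward(x)` initially equals `W @ x`; training updates only `A` and `B` (2 * r * d parameters here instead of d * d), which is the source of the fine-tuning savings.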
Stars: 59
Forks: 7
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dataflowr/llm_efficiency"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LMCache/LMCache: Supercharge Your LLM with the Fastest KV Cache Layer
Zefan-Cai/KVCache-Factory: Unified KV Cache Compression Methods for Auto-Regressive Models
OnlyTerp/kvtc: First open-source KVTC implementation (NVIDIA, ICLR 2026) -- 8-32x KV cache compression via PCA...
itsnamgyu/block-transformer: Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024)
OnlyTerp/turboquant: First open-source implementation of Google TurboQuant (ICLR 2026) -- near-optimal KV cache...