dataflowr/llm_efficiency

KV Cache & LoRA for minGPT

Quality score: 41 / 100 (Emerging)

This project helps developers working with Large Language Models (LLMs) make their models run faster and fine-tune more cheaply. It provides implementations of KV caching, which speeds up autoregressive text generation by reusing attention keys and values from earlier steps instead of recomputing them for every token, and LoRA (Low-Rank Adaptation), which cuts the cost of adapting a pre-trained model to new tasks by training only small low-rank weight updates. If you're building or customizing LLMs, you can use these techniques to reduce inference latency and fine-tuning resource use.
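
To make the two techniques concrete, here is a minimal PyTorch sketch of each, assuming a standard minGPT-style single-head attention layer. The class names CachedSelfAttention and LoRALinear and all hyperparameters are illustrative, not taken from this repository.

import torch
import torch.nn as nn

class CachedSelfAttention(nn.Module):
    # Single-head causal self-attention with an optional KV cache.
    # During generation, keys/values from earlier tokens are reused,
    # so each decoding step only computes projections for the new token.
    def __init__(self, d_model):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x, cache=None):
        # x: (batch, new_tokens, d_model); cache: (k, v) from prior steps
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        if cache is not None:
            past_k, past_v = cache
            k = torch.cat([past_k, k], dim=1)  # append along the time axis
            v = torch.cat([past_v, v], dim=1)
        att = (q @ k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
        # Causal mask: query i may attend to keys 0 .. offset + i.
        offset = k.size(1) - q.size(1)
        mask = torch.ones(q.size(1), k.size(1), dtype=torch.bool,
                          device=x.device).tril(offset)
        att = att.masked_fill(~mask, float("-inf")).softmax(dim=-1)
        return self.proj(att @ v), (k, v)  # return the updated cache

class LoRALinear(nn.Module):
    # Wraps a frozen linear layer with a trainable low-rank update:
    # y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of
    # shape (out, r). Only A and B receive gradients, so fine-tuning
    # touches a tiny fraction of the parameters.
    def __init__(self, base, r=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())

Typical usage: prefill the prompt once, then decode token by token with the cache, and wrap an existing projection for LoRA fine-tuning.

attn = CachedSelfAttention(64)
out, kv = attn(torch.randn(1, 5, 64))            # prefill the prompt
out, kv = attn(torch.randn(1, 1, 64), cache=kv)  # decode one new token
lora = LoRALinear(nn.Linear(64, 64), r=4)        # drop-in replacement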

Use this if you are a machine learning engineer or researcher looking to improve the inference speed or reduce the fine-tuning costs of transformer-based language models.

Not ideal if you are an end-user of an LLM or are looking for a high-level API to interact with pre-trained models without needing to modify their internal workings.

Tags: Large Language Models, LLM fine-tuning, AI model optimization, Machine Learning Engineering, Deep Learning performance
No package · No dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 11 / 25
Community: 12 / 25


Stars: 59
Forks: 7
Language: Python
License: Apache-2.0
Last pushed: Mar 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dataflowr/llm_efficiency"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
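
The same endpoint can be consumed from Python. A minimal sketch using the requests library; the response schema is not documented here, so the JSON is printed as-is rather than assuming field names.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/dataflowr/llm_efficiency")
resp = requests.get(url, timeout=10)
resp.raise_for_status()   # surface rate-limit or server errors
print(resp.json())        # inspect the returned JSON for available fields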