li-plus/flash-preference

Accelerate LLM preference tuning via prefix sharing with a single line of code

Score: 26 / 100 · Experimental

This tool helps machine learning engineers accelerate preference-based fine-tuning of large language models (LLMs). In methods such as Direct Preference Optimization (DPO) and reward modeling, each training pair consists of a chosen and a rejected response to the same prompt; by sharing that common prefix instead of encoding it twice, the tool speeds up both the forward and backward passes during training (see the sketch below).
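To make the idea concrete, here is a minimal sketch of prefix sharing at inference time using Hugging Face transformers and a KV cache. This is illustrative only and is not this library's actual API (which, per the description, also covers the backward pass); the model name and example strings are placeholders.

import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# One DPO pair: a shared prompt with a "chosen" and a "rejected" response.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "Explain gravity to a child."
chosen = " Gravity gently pulls things toward the ground."
rejected = " Gravity is a kind of magnetism."

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Encode the shared prompt once, keeping its key/value cache.
    prefix = model(prompt_ids, use_cache=True)

    for response in (chosen, rejected):
        # Recent transformers versions mutate the cache in place,
        # so give each continuation its own copy.
        past = copy.deepcopy(prefix.past_key_values)
        response_ids = tokenizer(response, return_tensors="pt").input_ids
        out = model(response_ids, past_key_values=past)
        # out.logits covers only the response tokens; the prompt's
        # forward pass was not repeated for this continuation.
        print(response, out.logits.shape)

Naively, the prompt would be re-encoded once per response; with the shared cache it is encoded once per pair, which is where the claimed compute and memory savings come from.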

No commits in the last 6 months.

Use this if you are an ML engineer training LLMs with preference data and want to significantly reduce computation time and memory usage without compromising model accuracy.

Not ideal if you are not directly involved in training or fine-tuning large language models, or if your tasks do not involve preference-based learning.

Tags: LLM-fine-tuning, Reinforcement-Learning-from-Human-Feedback, model-optimization, deep-learning-training
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 0 / 25

Stars: 51
Forks:
Language: Python
License: MIT
Last pushed: Jul 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/li-plus/flash-preference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
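The same endpoint can be queried from a script. A minimal Python equivalent of the curl command above, using the requests library (the response schema is not documented on this page, so the raw JSON is simply printed):

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/li-plus/flash-preference")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # raw quality-score payload for this repo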