Direct Preference Optimization LLM Tools
Methods and implementations for training LLMs through preference learning without explicit reward models, including DPO variants, reference-free approaches, and token-level optimization techniques. Does NOT include general RLHF, reward model training, or non-preference-based fine-tuning approaches.
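All of the tools below build on the same core objective: scoring a chosen response above a rejected one without a separate reward model. As a point of reference, here is a minimal pure-Python sketch of the per-pair DPO loss (function name and the example numbers are illustrative, not taken from any listed tool):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, given summed log-probabilities of
    the chosen and rejected responses under the policy and the frozen
    reference model. beta controls how far the policy may drift from the
    reference."""
    # Implicit reward margin: how much more the policy favors the chosen
    # response over the rejected one, relative to the reference model.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)), written with log1p for numerical stability.
    return math.log1p(math.exp(-beta * margin))

# A pair the policy already ranks correctly yields a loss below log(2),
# the value at zero margin.
loss = dpo_loss(-10.0, -14.0, -12.0, -12.0)
```

In practice the four log-probabilities come from summing token log-probs over each response; the scalar version above just makes the objective explicit.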
There are 8 direct preference optimization tools tracked. The highest-rated is codelion/pts at 41/100 with 146 stars.
Get all 8 projects as JSON
```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=direct-preference-optimization&limit=20"
```
The endpoint is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000/day.
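The same query can be issued from Python. The sketch below builds the request URL with the standard library; the JSON response schema is not documented here, so parsing is left as a hedged comment:

```python
from urllib.parse import urlencode

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def dataset_url(domain, subcategory, limit=20):
    """Build the query URL used by the curl example above."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

url = dataset_url("llm-tools", "direct-preference-optimization")

# Fetching requires network access; the response fields are an assumption:
#
#     import json
#     from urllib.request import urlopen
#     with urlopen(url) as resp:
#         data = json.load(resp)
```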
| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | codelion/pts | Pivotal Token Search | 41/100 | Emerging |
| 2 | DtYXs/Pre-DPO | Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using... | | Emerging |
| 3 | RLHFlow/Directional-Preference-Alignment | Directional Preference Alignment | | Emerging |
| 4 | dannylee1020/openpo | Building synthetic data for preference tuning | | Emerging |
| 5 | pspdada/Uni-DPO | [ICLR 2026] Official repository of "Uni-DPO: A Unified Paradigm for Dynamic... | | Experimental |
| 6 | liushunyu/awesome-direct-preference-optimization | A Survey of Direct Preference Optimization (DPO) | | Experimental |
| 7 | ikun-llm/ikun-DPO | Preference alignment training (Direct Preference Optimization) 👍👎 | | Experimental |
| 8 | Anirvan-Krishna/safety-alignment-of-gpt2 | A comparative study of Proximal Policy Optimization (PPO) for RLHF and... | | Experimental |