sail-sg/dice

Official implementation of Bootstrapping Language Models via DPO Implicit Rewards

Score: 33 / 100 — Emerging

This project helps machine learning researchers and engineers improve the alignment of existing large language models (LLMs). Starting from a DPO-trained LLM and a preference dataset, it uses the model's own implicit reward (the DICE approach) to rank generated responses into new preference pairs and run further rounds of DPO training. The output is a more capable, better-aligned LLM that can be deployed for various applications.

No commits in the last 6 months.

Use this if you are developing or fine-tuning large language models and want to improve their alignment and performance beyond standard DPO training.

Not ideal if you are looking for an out-of-the-box solution for end-user applications or do not have access to substantial GPU resources for training.

large-language-models model-fine-tuning AI-model-development LLM-alignment machine-learning-research
Badges: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 47
Forks: 3
Language: Python
License: MIT
Last pushed: Apr 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sail-sg/dice"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
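The same endpoint can also be queried programmatically. Below is a minimal Python sketch that builds the request URL from the pattern shown in the curl example above; the `quality_url` and `fetch_quality` helpers are illustrative names, and the JSON response schema is not documented here, so the fetched payload is returned as-is.

```python
import json
import urllib.request

# Base path taken from the curl example above; treat it as an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data as parsed JSON.

    No API key is required for up to 100 requests/day (per the note above);
    the response schema is undocumented here, so no fields are assumed.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("sail-sg", "dice"))
```

Anonymous usage is capped at 100 requests/day, so cache responses client-side if you poll multiple repositories.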