RLHFlow/RLHF-Reward-Modeling

Recipes to train reward models for RLHF.

Score: 46 / 100 (Emerging)

This project provides recipes for training a 'reward model' for large language models (LLMs), which learns human preferences by comparing model responses. Given pairs of responses to the same prompt, labeled by which one is preferred, it trains a model that can score future LLM responses. It is aimed at AI researchers and machine learning engineers who are developing or fine-tuning LLMs.
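As a rough illustration of the pairwise-preference technique described above (not this repo's actual training code), here is a minimal Bradley-Terry reward-modeling sketch in PyTorch; the model, data, and hyperparameters are all toy stand-ins:

    # Minimal sketch of pairwise (Bradley-Terry) reward-model training.
    # All names here are illustrative; see the repo's recipes for its
    # actual model classes and data pipeline.
    import torch
    import torch.nn as nn

    class TinyRewardModel(nn.Module):
        """Toy scalar-reward head over a bag-of-embeddings 'encoder'
        (a stand-in for a real LLM backbone)."""
        def __init__(self, vocab_size=1000, dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.head = nn.Linear(dim, 1)

        def forward(self, token_ids):                   # (batch, seq_len)
            pooled = self.embed(token_ids).mean(dim=1)  # (batch, dim)
            return self.head(pooled).squeeze(-1)        # (batch,) scalar rewards

    model = TinyRewardModel()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    # Fake preference batch: chosen vs. rejected responses to the same prompt.
    chosen = torch.randint(0, 1000, (8, 32))
    rejected = torch.randint(0, 1000, (8, 32))

    # Bradley-Terry objective: maximize P(chosen > rejected)
    # = sigmoid(r_chosen - r_rejected), i.e. minimize -logsigmoid(diff).
    opt.zero_grad()
    r_chosen, r_rejected = model(chosen), model(rejected)
    loss = -nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    loss.backward()
    opt.step()

The trained scalar output can then rank or score candidate responses during RLHF fine-tuning.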

1,520 stars. No commits in the last 6 months.

Use this if you are building an advanced LLM and need to train a robust reward model that accurately reflects human preferences to guide its behavior, or if you want to experiment with state-of-the-art reward modeling techniques.

Not ideal if you are looking for a pre-trained, ready-to-use LLM without needing to customize its preference alignment, or if you lack machine learning development experience.

Tags: LLM alignment · Reinforcement Learning from Human Feedback · Generative AI · AI research · Model fine-tuning
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25

The four subscores sum to the overall 46 / 100.


Stars: 1,520
Forks: 108
Language: Python
License: Apache-2.0
Last pushed: Apr 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RLHFlow/RLHF-Reward-Modeling"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
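For programmatic use, here is a minimal Python sketch of the same call using only the standard library; the endpoint URL is taken from the curl example above, and since the response schema is not documented here, the snippet simply prints whatever JSON comes back:

    # Fetch the quality data for this repo and pretty-print the raw JSON.
    # The response field names are not documented on this page, so no
    # specific keys are assumed.
    import json
    import urllib.request

    url = ("https://pt-edge.onrender.com/api/v1/quality/"
           "transformers/RLHFlow/RLHF-Reward-Modeling")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))  # inspect the fields the API returns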