holarissun/RewardModelingBeyondBradleyTerry
Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and Alternatives".
This project helps AI researchers and practitioners develop and test reward models for large language models. It trains and evaluates reward models on pre-generated embedding data, which dramatically reduces the need for expensive GPUs. The input is a dataset of language-model responses with their associated quality annotations; the output is a trained reward model that can assess response quality efficiently.
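To make the embedding-based setup concrete, here is a minimal sketch (not the repository's actual code) of the classic Bradley-Terry approach the paper re-examines: a linear reward model r(x) = w·e(x) fit on pre-computed response embeddings with the pairwise loss -log σ(r_chosen - r_rejected). The embedding data below is synthetic and purely illustrative.

```python
import numpy as np

# Hypothetical pre-generated embeddings: "chosen" responses drawn near +mu,
# "rejected" responses near -mu, standing in for real annotation pairs.
rng = np.random.default_rng(0)
dim, n_pairs = 16, 200
mu = rng.normal(size=dim)
chosen = rng.normal(size=(n_pairs, dim)) + mu
rejected = rng.normal(size=(n_pairs, dim)) - mu

# Linear reward model trained with the Bradley-Terry pairwise loss
# -log sigmoid(r_chosen - r_rejected), via plain gradient descent.
w = np.zeros(dim)
lr = 0.1
for _ in range(200):
    margin = (chosen - rejected) @ w           # r_chosen - r_rejected per pair
    p = 1.0 / (1.0 + np.exp(-margin))          # P(chosen preferred | model)
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

# Fraction of pairs where the model ranks the chosen response higher.
accuracy = ((chosen - rejected) @ w > 0).mean()
```

Because only embeddings (not the base LLM) are involved, this kind of training loop runs in seconds on a CPU, which is the workflow the repository enables at scale.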
No commits in the last 6 months.
Use this if you are an AI researcher or practitioner looking to conduct reward modeling research for large language models without needing high-end GPUs for training and evaluation.
Not ideal if you are looking to generate new response data or annotations from scratch, as those steps still require significant computational resources like GPUs.
Stars: 71
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Apr 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/holarissun/RewardModelingBeyondBradleyTerry"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
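The same request can be built from Python's standard library; this is a sketch that only assumes the endpoint path shown in the curl command above (the `quality_url` helper is hypothetical, not part of the service):

```python
from urllib.parse import quote

# Base path taken from the curl example above; assumed stable.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the (assumed) quality-endpoint URL for a GitHub repo."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("holarissun", "RewardModelingBeyondBradleyTerry")
# To fetch the JSON: urllib.request.urlopen(url).read() — no key needed
# for up to 100 requests/day, per the note above.
```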
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.