ZinYY/Online_RLHF
A PyTorch implementation of the paper "Provably Efficient Online RLHF with One-Pass Reward Modeling". This repository provides a flexible and modular approach to Online Reinforcement Learning from Human Feedback (Online RLHF).
This project helps machine learning engineers and researchers fine-tune large language models (LLMs) using Reinforcement Learning from Human Feedback (RLHF). Given an existing LLM and human preference data, it iteratively refines the model's responses to better align with human preferences. The output is a more capable, human-aligned language model.
Use this if you need to customize an existing large language model to produce responses tailored to human preferences or a particular task, with the flexibility to choose among different optimization and reward-modeling approaches.
Not ideal if you are looking for a pre-trained, ready-to-use language model or a simple API to interact with an LLM, as this tool requires familiarity with model training pipelines and deep learning frameworks.
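The preference-based refinement described above typically rests on a pairwise reward model. As a minimal sketch (not the repository's actual API; the function name and scalar-reward simplification are illustrative assumptions), here is the standard Bradley-Terry loss that such reward models minimize, where the model is trained to score the human-chosen response above the rejected one:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the chosen response beats the rejected
    one under the Bradley-Terry model: -log(sigmoid(r_chosen - r_rejected))."""
    margin = r_chosen - r_rejected
    # Numerically stable form of -log(sigmoid(margin)): log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# With no margin the model is indifferent: loss = ln 2
print(round(bradley_terry_loss(0.0, 0.0), 4))  # 0.6931
# A positive margin (chosen response scored higher) lowers the loss
print(round(bradley_terry_loss(2.0, 0.0), 4))  # 0.1269
```

In online RLHF the preference pairs arrive during training, so this loss is minimized incrementally as new comparisons come in rather than on a fixed offline dataset.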
Stars: 89
Forks: 17
Language: Python
License: —
Category: —
Last pushed: Dec 13, 2025
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ZinYY/Online_RLHF"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
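The same endpoint can be called from Python. The URL below comes from the curl command above; the response schema is not documented here, so the code simply pretty-prints whatever JSON comes back (function names are illustrative):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(owner: str, repo: str, category: str = "transformers") -> str:
    # Mirrors the path shown in the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Network call; no API key needed within the free 100 requests/day limit.
    with urllib.request.urlopen(build_quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Fields in the returned JSON are an unknown; inspect the raw output.
    print(json.dumps(fetch_quality("ZinYY", "Online_RLHF"), indent=2))
```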
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.