ZinYY/Online_RLHF

A PyTorch implementation of the paper "Provably Efficient Online RLHF with One-Pass Reward Modeling". This repository provides a flexible and modular approach to Online Reinforcement Learning from Human Feedback (Online RLHF).

Score: 42 / 100 (Emerging)

This project helps machine learning engineers and researchers fine-tune large language models (LLMs) more effectively using Reinforcement Learning from Human Feedback (RLHF). It takes existing LLMs and human preference data as input, allowing you to iteratively refine the model's responses to better align with desired human preferences. The output is a more capable and human-aligned language model.

Use this if you need to customize an existing large language model to produce responses that are specifically tailored to human preferences or a particular task, with the flexibility to choose different optimization and reward modeling approaches.

Not ideal if you are looking for a pre-trained, ready-to-use language model or a simple API to interact with an LLM, as this tool requires familiarity with model training pipelines and deep learning frameworks.

large-language-models model-fine-tuning AI-alignment natural-language-generation machine-learning-research
No License · No Package · No Dependents

Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 19 / 25


Stars: 89
Forks: 17
Language: Python
License: None
Last pushed: Dec 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ZinYY/Online_RLHF"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
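For programmatic access, the curl command above can be reproduced in Python. This is a minimal sketch: the URL pattern (`/quality/<registry>/<owner>/<repo>`) is inferred from the single example above, and the response is assumed to be JSON since the API's schema is not documented here.

```python
import json
from urllib.request import urlopen

# Base endpoint, taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The path layout is an assumption inferred from the one
    documented example (transformers/ZinYY/Online_RLHF).
    """
    return f"{API_BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch quality data; assumes the endpoint returns JSON."""
    with urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "ZinYY", "Online_RLHF"))
```

Note that the free tier is rate-limited to 100 requests per day, so cache responses rather than re-fetching on every call.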