liziniu/policy_optimization
Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data"
This project helps machine learning researchers and practitioners understand how to improve models trained with reinforcement learning from human feedback (RLHF). It lets you study how including out-of-preference data — data beyond the usual preference pairs — can improve a model's generalization. It is aimed at researchers and engineers working on large language models or other AI systems that learn from human feedback.
No commits in the last 6 months.
Use this if you are a machine learning researcher or practitioner investigating how different data types influence the performance and generalization of RLHF models.
Not ideal if you are looking for a plug-and-play solution for deploying an RLHF model in a production environment without needing to understand the underlying research.
Stars
28
Forks
6
Language
Python
License
—
Category
—
Last pushed
Dec 19, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/liziniu/policy_optimization"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
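The curl example above can also be scripted. The sketch below builds the same endpoint URL and parses a response; the JSON field names (`stars`, `forks`, `language`, `commits_30d`) are assumptions inferred from the fields shown on this page, not a documented schema — check the actual API response before relying on them.

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(index: str, owner: str, repo: str) -> str:
    # Mirrors the curl example's path: /quality/<index>/<owner>/<repo>
    return f"{BASE}/{quote(index)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "liziniu", "policy_optimization")
print(url)

# Hypothetical response body for illustration; the live API may return
# different keys. In practice you would fetch `url` (e.g. with urllib
# or requests) and parse the body the same way.
sample_body = '{"stars": 28, "forks": 6, "language": "Python", "commits_30d": 0}'
data = json.loads(sample_body)
print(data["stars"], data["commits_30d"])
```

With no API key this works up to the free 100-requests/day limit; a free key raises that to 1,000/day.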
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.