liziniu/policy_optimization

Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data"

Score: 31 / 100 (Emerging)

This project helps machine learning researchers and practitioners understand how to improve reinforcement learning from human feedback (RLHF) models. It lets you explore how training on out-of-preference data, i.e. data that falls outside the usual preference distribution, can improve model generalization. It is aimed at researchers and engineers working on large language models or other AI systems that learn from human feedback.

No commits in the last 6 months.

Use this if you are a machine learning researcher or practitioner investigating how different data types influence the performance and generalization of RLHF models.

Not ideal if you are looking for a plug-and-play solution for deploying an RLHF model in a production environment without needing to understand the underlying research.

Reinforcement Learning · Machine Learning Research · AI Model Training · Human Feedback Systems · Generative AI
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 28
Forks: 6
Language: Python
License: None
Last pushed: Dec 19, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/liziniu/policy_optimization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
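For programmatic access from Python, a minimal sketch using the requests library is shown below. It assumes the endpoint returns a JSON body; the X-Api-Key header name for the optional key is an assumption, since this page does not document how a key should be passed.

import requests

# Same endpoint as the curl example above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/liziniu/policy_optimization")

api_key = None  # set to a free key for the higher 1,000 requests/day limit
headers = {"X-Api-Key": api_key} if api_key else {}  # header name is an assumption

response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()
data = response.json()  # assumed JSON body, e.g. scores and repo stats
print(data)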