CLAIRE-Labo/quantile-reward-policy-optimization

Official codebase for "Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions" (Matrenok et al. 2025).

Overall score: 34 / 100 (Emerging)

This project provides the research codebase for "Quantile Reward Policy Optimization" (QRPO), a method for aligning large language models with human preferences via pointwise regression. Given policy log probabilities, reference log probabilities, and the corresponding rewards, it computes the training loss used to optimize the policy. Its audience is scientists and machine learning researchers working on advanced LLM alignment.
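
That input/output interface is simple to sketch. Below is a minimal illustration in Python of a pointwise regression loss over those three inputs; the name qrpo_loss, the beta scale, and the squared-error form are illustrative assumptions, not the repository's actual API, and the quantile-reward and partition-function machinery from the paper is omitted here:

import torch

def qrpo_loss(policy_logps, ref_logps, rewards, beta=0.1):
    """Illustrative sketch only, not the repository's API.

    Regresses the policy/reference log-ratio onto a reward-derived
    target, per the pointwise-regression idea in the description.
    """
    log_ratio = policy_logps - ref_logps       # log pi(y|x) - log pi_ref(y|x), per sample
    target = rewards / beta                    # hypothetical reward-to-target scaling
    return ((log_ratio - target) ** 2).mean()  # pointwise squared-error loss

Called with three 1-D tensors of per-sample values, it returns a scalar loss suitable for backpropagation.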

Use this if you are a machine learning researcher developing or implementing novel reward-based policy optimization algorithms for large language models.

Not ideal if you are looking for a plug-and-play solution for fine-tuning LLMs without deep involvement in the underlying algorithmic research.

large-language-models reinforcement-learning-from-human-feedback policy-optimization machine-learning-research model-alignment
No package published · No dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 6 / 25

Stars: 30
Forks: 2
Language: Python
License: MIT
Last pushed: Dec 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/CLAIRE-Labo/quantile-reward-policy-optimization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
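
For scripted access, a minimal Python sketch of the same request; it assumes the endpoint returns JSON, and the response schema is not documented here, so the example just prints the raw payload:

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/CLAIRE-Labo/quantile-reward-policy-optimization")
resp = requests.get(url, timeout=10)  # no API key needed at 100 requests/day
resp.raise_for_status()               # surface HTTP errors early
print(resp.json())                    # assumed JSON; schema not specified above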