PKU-Alignment/safe-rlhf

Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

Score: 51 / 100 (Established)

This project helps AI developers and researchers train large language models (LLMs) to be both helpful and harmless. It takes human-labeled preference data about model responses and outputs an LLM that adheres to safety constraints while still being useful. The ideal end-user is an AI researcher or machine learning engineer focused on aligning LLM behavior with human values.


Use this if you need to build or fine-tune an LLM that rigorously avoids generating harmful content while maximizing helpfulness, using advanced safe reinforcement learning techniques.

Not ideal if you are looking for a simple, out-of-the-box LLM for general use without specific safety alignment needs or if you lack machine learning expertise.

AI-safety · LLM-fine-tuning · responsible-AI · natural-language-processing · machine-learning-engineering
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 1,590
Forks: 131
Language: Python
License: Apache-2.0
Last pushed: Nov 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/safe-rlhf"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
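
If you prefer to call the API from a script rather than curl, a minimal Python sketch is below. It reuses the endpoint from the curl example above; the use of the requests library and the plain dump of the JSON payload are assumptions, since the response schema is not documented here.

import requests

# Endpoint taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/safe-rlhf"

def fetch_quality(url: str = URL) -> dict:
    """GET the quality record and return the parsed JSON payload."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # surfaces 4xx/5xx errors, e.g. rate limiting
    return resp.json()

if __name__ == "__main__":
    # Field names in the response are not documented here, so just print the payload.
    print(fetch_quality())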