PKU-Alignment/safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
This project helps AI developers and researchers train large language models (LLMs) to be both helpful and harmless. It takes human-labeled preference data about model responses and outputs an LLM that adheres to safety constraints while still being useful. The ideal end-user is an AI researcher or machine learning engineer focused on aligning LLM behavior with human values.
Use this if you need to build or fine-tune an LLM that rigorously avoids generating harmful content while maximizing helpfulness, using advanced safe reinforcement learning techniques.
Not ideal if you are looking for a simple, out-of-the-box LLM for general use without specific safety alignment needs, or if you lack machine learning expertise.
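At its core, Safe RLHF optimizes helpfulness (a reward model) subject to a harmlessness constraint (a cost model), typically via a Lagrangian relaxation: the policy maximizes reward minus a learned multiplier times cost, while the multiplier rises whenever average cost exceeds the budget. The sketch below illustrates that dual update with a hypothetical helper; it is not the repo's actual API.

```python
def lagrangian_objective(rewards, costs, lam, cost_limit=0.0):
    """Sketch of the Safe RLHF constrained objective (hypothetical helper,
    not the repo's API). The policy maximizes mean reward minus
    lam * mean cost; the multiplier lam is pushed up whenever the
    average cost exceeds cost_limit, and down otherwise.
    """
    mean_r = sum(rewards) / len(rewards)
    mean_c = sum(costs) / len(costs)
    # Primal step: the policy's surrogate objective (to be maximized)
    policy_obj = mean_r - lam * mean_c
    # Dual step: gradient on lam; positive when the constraint is violated
    lam_grad = mean_c - cost_limit
    return policy_obj, lam_grad
```

In the full algorithm these quantities come from separately trained reward and cost models scoring PPO rollouts; here plain lists stand in for those scores.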
Stars: 1,590
Forks: 131
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/safe-rlhf"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
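The same endpoint can be called from Python with the standard library. The URL is taken from the curl example above; the JSON field names are assumptions about the response shape, so inspect the payload before relying on any key.

```python
import json
import urllib.request

# Endpoint from the curl example above (no API key needed
# for up to 100 requests/day).
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "transformers/PKU-Alignment/safe-rlhf")

def fetch_raw(url=API_URL):
    """Fetch the raw JSON body from the quality API."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def parse_quality(raw):
    """Decode the JSON payload into a dict; the exact field
    names (e.g. "stars") are assumptions, not documented."""
    return json.loads(raw)
```

Usage: `parse_quality(fetch_raw())` returns a dict you can inspect with `.keys()`.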
Related projects
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.