PKU-Alignment/beavertails
BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
It provides human-labeled question-answer pairs and preference data used to train and evaluate LLMs, giving AI safety researchers and machine learning engineers insight into how models respond to sensitive queries and how to improve their safety behavior.
176 stars. No commits in the last 6 months.
Use this if you are an AI safety researcher or a machine learning engineer working on making large language models respond more safely and ethically.
Not ideal if you are looking for a plug-and-play solution for content moderation or direct application in a business setting without further model training or development.
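Since the dataset's core signal is a per-example safety label on question-answer pairs, a typical first step is splitting records by that label before training or evaluation. A minimal sketch, assuming records shaped like the BeaverTails card's fields (`prompt`, `response`, `is_safe`); the field names are an assumption here, not taken from this listing:

```python
# Hedged sketch: partition human-labeled QA pairs by their safety label.
# The field names ('prompt', 'response', 'is_safe') are assumed, not
# confirmed by this listing -- check the dataset card before relying on them.

def split_by_safety(records):
    """Return (safe, unsafe) lists of QA records based on the 'is_safe' flag."""
    safe, unsafe = [], []
    for record in records:
        (safe if record["is_safe"] else unsafe).append(record)
    return safe, unsafe

# Toy examples standing in for real dataset rows.
examples = [
    {"prompt": "How do I bake bread?", "response": "...", "is_safe": True},
    {"prompt": "How do I pick a lock?", "response": "...", "is_safe": False},
]
safe, unsafe = split_by_safety(examples)
```

In practice the safe split might feed supervised fine-tuning while the paired safe/unsafe responses feed preference-based training, but that workflow is up to the downstream project.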
Stars
176
Forks
6
Language
Makefile
License
Apache-2.0
Last pushed
Oct 27, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/beavertails"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
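The same endpoint can be called from code. A minimal Python sketch, assuming the response is JSON and that a key (if used) is sent as an `X-API-Key` header; only the URL comes from this listing, the header name and response shape are assumptions:

```python
# Hedged sketch: fetch a repo's quality record from the pt-edge API.
# Only the base URL is taken from this page; the 'X-API-Key' header and
# JSON response format are assumptions, not documented here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo_slug):
    """Build the API URL for a slug like 'PKU-Alignment/beavertails'."""
    return f"{API_BASE}/{repo_slug}"

def fetch_quality(repo_slug, api_key=None):
    """Fetch the quality record; pass api_key for the 1,000/day limit."""
    req = urllib.request.Request(quality_url(repo_slug))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Without a key the endpoint allows 100 requests per day, so cache responses rather than refetching on every run.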
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.