PKU-Alignment/beavertails

BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).

Quality score: 34 / 100 (Emerging)

BeaverTails is a collection of datasets designed to help AI safety researchers and developers make large language models (LLMs) safer and better aligned with human values. It provides human-labeled question-answering pairs and preference data used to train and evaluate LLMs. The payoff is a better understanding of how LLMs respond to sensitive queries and how to improve their safety features, which makes it particularly useful for AI safety researchers and machine learning engineers.

176 stars. No commits in the last 6 months.

Use this if you are an AI safety researcher or a machine learning engineer working on making large language models respond more safely and ethically.

Not ideal if you are looking for a plug-and-play solution for content moderation or direct application in a business setting without further model training or development.

Tags: AI Safety, Large Language Models, Ethical AI, Content Moderation, Machine Learning Research
Badges: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 176
Forks: 6
Language: Makefile
License: Apache-2.0
Last pushed: Oct 27, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/beavertails"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
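For scripted use, a minimal Python sketch of consuming the endpoint is below. The URL comes from the curl command above; the response is assumed to be a flat JSON object (the exact field names are an assumption, so the sketch just prints whatever top-level keys come back rather than relying on a specific schema):

```python
import json
from urllib.request import urlopen

# Endpoint copied from the curl command above; no key needed for up to 100 requests/day.
API = "https://pt-edge.onrender.com/api/v1/quality/transformers/PKU-Alignment/beavertails"

def fetch_quality(url: str = API) -> dict:
    """Fetch the quality payload as a dict (requires network access)."""
    with urlopen(url) as resp:
        return json.load(resp)

def summarize(payload: dict) -> str:
    """Render top-level fields one per line; the exact schema is an assumption."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(payload.items()))

# Usage (uncomment to hit the live API):
# print(summarize(fetch_quality()))
```

With a free key for the higher 1,000/day limit, consult the API's documentation for how to attach it; the authentication mechanism is not shown on this card, so it is not guessed at here.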