tatsu-lab/alpaca_farm

A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.

Quality score: 42 / 100 (Emerging)

This tool helps researchers and developers working on AI models that learn from human feedback. It provides a way to simulate how people would rate different AI responses, eliminating the need for costly and slow human data collection. The input is pairs of AI-generated text responses, and the output is simulated preferences, indicating which response is better. This is designed for AI researchers and machine learning engineers developing advanced language models.
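For orientation, here is a minimal sketch of what a simulated pairwise annotation call might look like. The class, method, dictionary keys, and output field used below (PairwiseAutoAnnotator, annotate_pairs, output_1/output_2, preference) are assumptions based on the project's documented auto-annotation interface rather than anything shown on this page, may differ between versions, and the simulated annotators typically also require an LLM API key to be configured.

# Hypothetical sketch of simulated pairwise preference annotation.
# Names below are assumptions drawn from AlpacaFarm's documentation and may not match your installed version.
from alpaca_farm.auto_annotations import PairwiseAutoAnnotator

pairs = [
    {
        "instruction": "Explain RLHF in one sentence.",
        "input": "",
        "output_1": "RLHF fine-tunes a model with a reward learned from human preference data.",
        "output_2": "RLHF is a kind of model.",
    }
]

annotator = PairwiseAutoAnnotator()          # pool of simulated annotators backed by LLM prompts
annotated = annotator.annotate_pairs(pairs)  # adds a preference label to each pair
print(annotated[0]["preference"])            # assumed field: 1 or 2, the preferred output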

842 stars. No commits in the last 6 months.

Use this if you are developing or experimenting with methods for training AI models to follow instructions, especially methods that learn from preference feedback, and you want to iterate quickly and cheaply without collecting real human data.

Not ideal if you need to deploy a production-ready model that has been validated with real human feedback, as this is a research simulation tool.

AI research · large language models · instruction following · model alignment · machine learning engineering
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25
(The four sub-scores sum to the overall score: 0 + 10 + 16 + 16 = 42 / 100.)


Stars: 842
Forks: 63
Language: Python
License: Apache-2.0
Last pushed: Jul 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tatsu-lab/alpaca_farm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
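If you prefer to pull the same data from a script, a minimal Python sketch using requests is below. The endpoint is the one from the curl command above; the shape of the JSON payload is not documented on this page, so treat any field you read from it as an assumption and inspect the raw response first.

# Minimal sketch: fetch this page's quality data from the API endpoint shown above.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/tatsu-lab/alpaca_farm"
resp = requests.get(url, timeout=10)   # no key needed within the 100 requests/day limit
resp.raise_for_status()
data = resp.json()
print(data)  # payload schema is undocumented here; inspect before relying on specific fields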