tatsu-lab/alpaca_farm
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
This tool helps researchers and developers working on AI models that learn from human feedback. It simulates how people would rate different AI responses, removing the need for costly, slow human data collection. The input is pairs of AI-generated text responses; the output is simulated preferences indicating which response is better. It is designed for AI researchers and machine learning engineers developing advanced language models.
842 stars. No commits in the last 6 months.
Use this if you are developing or experimenting with methods for training AI models to follow instructions, particularly methods that learn from user preferences, and want to iterate quickly and cheaply without real human data.
Not ideal if you need to deploy a production-ready model that has been validated with real human feedback, as this is a research simulation tool.
Stars: 842
Forks: 63
Language: Python
License: Apache-2.0
Category:
Last pushed: Jul 01, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tatsu-lab/alpaca_farm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
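The same record can be fetched from Python. A minimal sketch, assuming only the endpoint URL shown in the curl example above; the response schema is not documented here, so the JSON is printed as-is rather than parsed into named fields:

```python
# Sketch: fetch the quality record for tatsu-lab/alpaca_farm from the API.
# The URL is copied verbatim from the curl example; no API key is needed
# within the free 100 requests/day tier.
import json
from urllib.request import urlopen

API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/tatsu-lab/alpaca_farm"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality record and decode the JSON response body."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Pretty-print whatever fields the API returns.
    print(json.dumps(fetch_quality(), indent=2))
```

For higher limits, a free key (1,000 requests/day) would be supplied however the API documents it; the sketch above only covers the keyless tier.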
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.