haozheji/exact-optimization
ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment
This project helps machine learning engineers align language model outputs with human preferences. It takes a pre-trained language model plus either human-curated preference data or reward model scores, and produces a fine-tuned model that generates more desirable text. It is aimed at developers and researchers who build and deploy large language models and want to refine their behavior.
No commits in the last 6 months.
Use this if you need to fine-tune a language model so its generated text is more aligned with specific human preferences, using an efficient optimization method.
Not ideal if you are looking for a no-code solution or a tool for general-purpose language model deployment without focusing on preference alignment.
Stars: 56
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jun 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/haozheji/exact-optimization"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
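The same endpoint can be called from a script instead of curl. A minimal Python sketch, assuming only the URL pattern shown above (the response's JSON schema is not documented here, so the commented field names are hypothetical):

```python
import json
import urllib.request

# Endpoint base taken from the curl example on this card.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given ecosystem and repository."""
    return f"{BASE}/{ecosystem}/{repo}"

url = quality_url("transformers", "haozheji/exact-optimization")
print(url)

# To actually fetch the data (rate-limited to 100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)  # hypothetical fields, e.g. data.get("stars")
```

The request itself is left commented out so the snippet runs offline; uncomment it to hit the live API.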
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.