tlc4418/llm_optimization
A repo for RLHF training and best-of-n (BoN) sampling over LLMs, with support for reward model ensembles.
This project helps AI researchers and practitioners refine Large Language Models (LLMs) to produce better, more aligned responses. It takes an LLM and a dataset of desired responses or preferences, then outputs an optimized LLM or a reward model that can be used to improve an existing LLM's performance. The primary users are researchers focused on developing and evaluating advanced LLMs.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer working on fine-tuning LLMs and want to explore methods like reward model ensembles or best-of-n inference to mitigate overoptimization.
Not ideal if you are looking for an out-of-the-box solution to apply an LLM to a specific business problem without deep engagement in model training and evaluation.
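The best-of-n idea mentioned above can be sketched in a few lines: sample several candidate responses and keep the one an ensemble of reward models scores highest, averaging scores across the ensemble to reduce single-model overoptimization. This is a minimal illustration, not the repo's implementation; `generate` and `reward_models` are hypothetical callables standing in for an actual LLM and trained reward models.

```python
def best_of_n(prompt, generate, reward_models, n=4):
    """Best-of-n sampling with a reward model ensemble.

    Draws n candidate responses from `generate` and returns the one with
    the highest mean score across `reward_models`. Averaging over an
    ensemble hedges against overoptimizing any single reward model.
    """
    candidates = [generate(prompt) for _ in range(n)]

    def ensemble_score(response):
        # Mean score across ensemble members (hypothetical scorers here)
        return sum(rm(prompt, response) for rm in reward_models) / len(reward_models)

    return max(candidates, key=ensemble_score)
```

For example, with toy scorers (response length and count of the letter "a") and a generator that cycles through fixed strings, the function returns whichever candidate maximizes the mean of the two scores.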
Stars
47
Forks
6
Language
Python
License
MIT
Category
Last pushed
Jan 16, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tlc4418/llm_optimization"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.