liziniu/ReMax
Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models"
This project provides a method for aligning large language models (LLMs) with human preferences. It takes an existing LLM and human feedback data, then applies reinforcement learning to produce a fine-tuned model that generates more desirable responses. It is aimed at AI researchers, machine learning engineers, and data scientists who develop or deploy LLMs.
201 stars. No commits in the last 6 months.
Use this if you need to fine-tune a large language model to be more helpful, harmless, or aligned with specific quality criteria, especially when GPU memory or training time is a constraint.
Not ideal if you are not working with large language models or do not have access to GPU resources for model training.
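Per the paper's title and summary above, ReMax is a simplified alternative to PPO for RLHF: a REINFORCE-style update that uses the reward of the greedy-decoded response as a baseline, removing the learned value model. A toy single-step sketch of that idea (the response set, rewards, and learning rate here are illustrative inventions, not the repository's API):

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def remax_step(logits, rewards, lr=0.5, rng=random):
    """One ReMax-style update on a toy one-shot 'choose a response' policy.

    Core idea (per the paper's description): REINFORCE with the greedy
    response's reward as the baseline, so no value model is needed.
    """
    probs = softmax(logits)
    # Sample a response from the current policy.
    a = rng.choices(range(len(logits)), weights=probs, k=1)[0]
    # Greedy 'decoding' analogue: the most probable response.
    greedy = max(range(len(logits)), key=lambda i: probs[i])
    advantage = rewards[a] - rewards[greedy]  # greedy-baseline advantage
    # d/d logit_j of log p(a) is 1[j == a] - p_j (log-softmax gradient).
    return [
        l + lr * advantage * ((1.0 if j == a else 0.0) - probs[j])
        for j, l in enumerate(logits)
    ]
```

Repeated steps shift probability mass toward the highest-reward response; when the sampled and greedy responses coincide, the advantage is zero and no update occurs, which keeps the estimator's variance low.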
Stars
201
Forks
14
Language
Python
License
—
Category
—
Last pushed
Dec 16, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/liziniu/ReMax"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.