haoliuhl/chain-of-hindsight
Simple next-token-prediction for RLHF
This project helps machine learning engineers fine-tune large language models (LLMs) to better align with human preferences. You feed it raw human feedback data, such as preference-ranked dialogue responses or summaries, and it produces a more aligned LLM. It is aimed at researchers and practitioners working on language model behavior and safety.
229 stars. No commits in the last 6 months.
Use this if you need to train a language model using human feedback to guide its behavior, producing outputs that are more helpful, honest, or harmless.
Not ideal if you are looking for a pre-trained, ready-to-use language model or a tool for general text generation without specific alignment goals.
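To make the training idea concrete, here is a minimal sketch of the chain-of-hindsight recipe: chain a preferred and a rejected response into one sequence, annotated with natural-language feedback, then fine-tune with ordinary next-token prediction. The template wording and function name below are illustrative assumptions, not the repo's exact format.

def coh_training_text(prompt: str, preferred: str, rejected: str) -> str:
    """Chain both answers behind feedback phrases so a standard
    next-token-prediction objective teaches the model which behavior
    each label marks (hypothetical template, for illustration only)."""
    return (
        f"{prompt}\n"
        f"A less helpful answer: {rejected}\n"
        f"A more helpful answer: {preferred}"
    )

# Toy preference pair:
print(coh_training_text(
    "Summarize: The cat sat on the mat.",
    "A cat sat on a mat.",
    "Cats are mammals.",
))

In the paper's setup, the fine-tuned model is then prompted with the positive feedback phrase at inference time, steering generation toward the preferred behavior.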
Stars: 229
Forks: 17
Language: Python
License: Apache-2.0
Category: llm-tools
Last pushed: Sep 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/haoliuhl/chain-of-hindsight"
Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000/day.
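For scripted access, here is a minimal Python sketch of the same request using the requests library. It assumes the endpoint returns a JSON body; no response fields are guessed.

import requests

# Same endpoint as the curl example above; keyless access is
# rate-limited to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/haoliuhl/chain-of-hindsight")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumption: the API responds with JSON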
Higher-rated alternatives
hud-evals/hud-python
OSS RL environment + evals toolkit
hiyouga/EasyR1
EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
OpenRL-Lab/openrl
Unified Reinforcement Learning Framework
sail-sg/oat
🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning,...
opendilab/awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)