haoliuhl/chain-of-hindsight

Simple next-token-prediction for RLHF

Score: 39 / 100 (Emerging)

This project helps machine learning engineers fine-tune large language models (LLMs) to better align with human preferences. You feed it human feedback data, such as preference-ranked dialogue responses or summaries, and it produces a fine-tuned, better-aligned LLM. It's designed for researchers and practitioners working on improving language model behavior and safety.
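As the tagline suggests, the approach replaces the reinforcement-learning loop with ordinary next-token prediction: preference comparisons are rewritten as text that places a worse and a better response side by side under natural-language feedback, and the model is fine-tuned on those sequences. Below is a minimal sketch of that data transformation; the feedback phrases and record fields are illustrative assumptions, not the repository's actual data format.

# Illustrative sketch: turn a preference pair into a Chain-of-Hindsight-style
# training string for plain next-token-prediction fine-tuning. The feedback
# phrases and record fields are assumptions for demonstration only.

def make_coh_example(prompt: str, chosen: str, rejected: str) -> str:
    # Pack both responses into one sequence annotated with hindsight feedback,
    # so the model learns which continuation follows the "good" phrasing.
    return (
        f"{prompt}\n"
        f"A less helpful answer: {rejected}\n"
        f"A more helpful answer: {chosen}"
    )

record = {
    "prompt": "Explain why the sky is blue.",
    "chosen": "Sunlight scatters off air molecules, and shorter blue wavelengths scatter the most.",
    "rejected": "Because it reflects the ocean.",
}

print(make_coh_example(**record))
# The resulting string is used as an ordinary language-modeling target; at
# inference time the model is prompted with the positive feedback phrase to
# elicit the preferred behavior.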

229 stars. No commits in the last 6 months.

Use this if you need to train a language model using human feedback to guide its behavior, producing outputs that are more helpful, honest, or harmless.

Not ideal if you are looking for a pre-trained, ready-to-use language model or a tool for general text generation without specific alignment goals.

Topics: Language Model Alignment · Reinforcement Learning from Human Feedback (RLHF) · LLM Fine-tuning · AI Safety · Conversational AI
Flags: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 229
Forks: 17
Language: Python
License: Apache-2.0
Last pushed: Sep 30, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/haoliuhl/chain-of-hindsight"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
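If you'd rather pull the same data from Python than from the shell, here is a minimal sketch using the requests library; it assumes the endpoint returns JSON, so inspect the raw response body if parsing fails.

import requests

# Same endpoint as the curl example above; no API key required for up to
# 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/haoliuhl/chain-of-hindsight"
response = requests.get(url, timeout=10)
response.raise_for_status()  # surface HTTP errors early
data = response.json()       # assumed JSON payload
print(data)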