clam004/minichatgpt

An annotated tutorial of the Hugging Face TRL repo for reinforcement learning from human feedback (RLHF), connecting the equations of PPO and GAE to the corresponding lines of code in the PyTorch implementation.

Score: 22 / 100 (Experimental)

This project helps machine learning engineers understand how to train language models to complete sentences with a desired sentiment, similar to how ChatGPT learns. It takes in a base language model and scores assigned to its generated text, then outputs a refined model capable of producing text aligned with those scores. This is for AI/ML practitioners looking to implement reinforcement learning from human feedback (RLHF) for text generation tasks.
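The two pieces of math the tutorial annotates are Generalized Advantage Estimation (GAE) and PPO's clipped surrogate objective. A minimal PyTorch sketch of both is below; this is an illustration under my own assumptions (function names are mine, the trajectory is treated as ending after the last token), not the repo's actual code:

```python
import torch

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """GAE: A_t = sum_l (gamma*lam)^l * delta_{t+l},
    where delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    advantages = []
    last_gae = 0.0
    next_value = 0.0  # assumption: episode terminates after the last step
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]
        last_gae = delta + gamma * lam * last_gae
        advantages.insert(0, last_gae)
        next_value = values[t]
    return torch.tensor(advantages)

def ppo_clip_loss(logprobs, old_logprobs, advantages, clip_eps=0.2):
    """PPO clipped surrogate: -E[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)],
    with the ratio r_t = pi_theta / pi_old computed in log space."""
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

In the RLHF loop, `rewards` would come from sentiment scores on generated text and `values` from the value head of the policy model; the clipping keeps each policy update close to the sampling policy.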

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher trying to understand the practical implementation of Reinforcement Learning from Human Feedback (RLHF) for training generative language models.

Not ideal if you are looking for a plug-and-play solution to deploy a chatbot or if you do not have a strong background in machine learning and deep learning concepts.

natural-language-processing reinforcement-learning generative-AI large-language-models sentiment-analysis
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 20
Forks: 2
Language: Jupyter Notebook
License: none
Last pushed: Apr 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/clam004/minichatgpt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.