ksm26/Reinforcement-Learning-from-Human-Feedback

Embark on the "Reinforcement Learning from Human Feedback" course and align Large Language Models (LLMs) with human values.

Score: 30 / 100 (Emerging)

This course helps AI developers and researchers align LLMs with human values and preferences. It teaches how to collect human feedback on a model's outputs and use that feedback to train the model toward the responses humans prefer. It is aimed at practitioners responsible for developing and refining AI models to ensure ethical and relevant outputs.
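At the heart of this methodology is a reward model trained on human preference pairs. The minimal PyTorch sketch below illustrates that step under simple assumptions: rewards are already computed as scalars, and a pairwise (Bradley-Terry) loss pushes the human-chosen response's reward above the rejected one's. The function and dummy tensors are illustrative stand-ins, not code from the course.

import torch
import torch.nn.functional as F

def preference_loss(reward_chosen, reward_rejected):
    # Pairwise (Bradley-Terry) objective: the loss shrinks as the
    # chosen response's reward rises above the rejected response's.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Dummy scalar rewards for a batch of 4 human-preference pairs.
reward_chosen = torch.tensor([1.2, 0.3, 0.8, -0.1])
reward_rejected = torch.tensor([0.4, 0.5, -0.2, -0.9])

print(preference_loss(reward_chosen, reward_rejected).item())

In a full RLHF pipeline, the learned reward model then drives a policy-optimization step (commonly PPO) that fine-tunes the LLM itself.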

No commits in the last 6 months.

Use this if you need to train an LLM to better reflect human preferences and values, moving beyond basic fine-tuning.

Not ideal if you are not working with LLMs, or if you want a pre-trained, ready-to-use solution rather than a training methodology.

AI-model-alignment Generative-AI-development LLM-fine-tuning Ethical-AI AI-research
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 17 / 25
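The four category scores sum to the overall score: 0 + 5 + 8 + 17 = 30.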

Stars: 12
Forks: 9
Language: Jupyter Notebook
License: None
Last pushed: Jan 31, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ksm26/Reinforcement-Learning-from-Human-Feedback"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
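A minimal Python sketch of the same request, assuming the endpoint returns JSON. The payload's field names are not documented here, so the example simply prints the raw response:

import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/transformers/"
    "ksm26/Reinforcement-Learning-from-Human-Feedback"
)

# No API key needed for up to 100 requests/day.
response = requests.get(url, timeout=10)
response.raise_for_status()

# Inspect the raw JSON payload to learn the actual schema.
print(response.json())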