WooooDyy/LLM-Reverse-Curriculum-RL

Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" by Zhiheng Xi et al.

Quality score: 29 / 100 (Experimental)

This project helps machine learning researchers improve how Large Language Models (LLMs) reason and solve complex problems. Applying a "reverse curriculum" reinforcement learning approach, it takes an existing LLM and produces a fine-tuned LLM that performs better on tasks requiring step-by-step logical thought, such as mathematical problem-solving or understanding nuanced text.
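To make the "reverse curriculum" idea concrete, here is a minimal, self-contained sketch (not the authors' implementation; all names are hypothetical, and the RL reward and update logic are omitted). The core scheduling trick is to start the model from progressively earlier points in a demonstrated reasoning trace, so early training stages only need to complete the final steps:

```python
def reverse_curriculum_stages(demonstration):
    """Yield (prefix_given_to_model, steps_model_must_generate) pairs,
    ordered from the easiest stage (longest supplied prefix) to the
    hardest (no prefix, model reasons from scratch).

    Hypothetical helper illustrating the scheduling only; in actual
    training each stage would be optimized with RL on final-answer
    reward before moving the start point backward."""
    n = len(demonstration)
    for k in range(n - 1, -1, -1):  # k = number of demonstration steps supplied
        yield demonstration[:k], demonstration[k:]

demo = ["parse problem", "set up equation", "solve", "answer: 42"]
stages = list(reverse_curriculum_stages(demo))

# Easiest stage: model is given three steps and must produce only the last.
assert stages[0] == (demo[:3], demo[3:])
# Hardest stage: model starts with no demonstration prefix at all.
assert stages[-1] == ([], demo)
```

The payoff of this ordering is that sparse final-answer rewards are easy to reach in early stages (the model is already near the solution), and the curriculum gradually transfers that competence to full-length reasoning.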

116 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking to enhance the reasoning capabilities of Large Language Models for complex tasks through advanced training methodologies.

Not ideal if you are an end-user simply looking to apply an existing LLM for standard tasks without deep technical involvement in model training and optimization.

Tags: Large Language Models, Reinforcement Learning, AI Research, Machine Learning Engineering, Natural Language Processing
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 11 / 25


Stars: 116
Forks: 10
Language: Python
License: none
Last pushed: Feb 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/WooooDyy/LLM-Reverse-Curriculum-RL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.