complex-reasoning/RPG
[ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508)
This project provides tools for researchers and AI developers to refine large language models (LLMs) for complex reasoning tasks, specifically in mathematics. It takes structured math problem datasets as input and outputs a fine-tuned LLM capable of better step-by-step problem-solving. This is designed for those working on advancing LLM capabilities for accurate, explainable reasoning.
Use this if you are an AI researcher or developer aiming to improve an LLM's ability to solve intricate mathematical problems and require a systematic framework for applying KL-regularized policy gradient methods.
Not ideal if you are a non-technical end-user looking for an out-of-the-box math problem solver or if you do not have significant computational resources and expertise in reinforcement learning for LLMs.
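To make the method's name concrete, here is a minimal sketch of a generic KL-regularized policy-gradient loss. This is an illustration of the general technique only, not the paper's exact formulation: the REINFORCE surrogate, the k3-style KL estimator, and the `beta` coefficient are all standard choices assumed for this sketch.

```python
import numpy as np

def kl_regularized_pg_loss(logp, logp_ref, advantages, beta=0.1):
    """Generic KL-regularized policy-gradient loss (illustrative sketch).

    logp       : log-probs of sampled actions under the current policy
    logp_ref   : log-probs of the same actions under a frozen reference policy
    advantages : per-sample advantage estimates
    beta       : weight of the KL penalty keeping the policy near the reference
    """
    # REINFORCE surrogate: push up log-probs of high-advantage samples
    pg_term = -(logp * advantages)
    # k3 KL estimator: exp(r) - r - 1 with r = logp_ref - logp,
    # which is always non-negative and zero when the policies agree
    log_ratio = logp_ref - logp
    kl_term = np.exp(log_ratio) - log_ratio - 1.0
    return float(np.mean(pg_term + beta * kl_term))
```

When the current and reference log-probs coincide and advantages are zero, the loss is exactly zero; diverging from the reference adds a strictly positive penalty scaled by `beta`.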
Stars: 65
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Feb 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/complex-reasoning/RPG"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
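The curl call above can be wrapped in a small Python helper. Note the assumptions: the URL path layout (ecosystem/owner/repo) is inferred from that single example, the response schema is not documented here, and the `Authorization: Bearer` header for keyed access is a guess, so verify against the service's API documentation.

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality endpoint URL, mirroring the curl example above.

    The ecosystem/owner/repo path layout is assumed from that one example.
    """
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(ecosystem, owner, repo, api_key=None):
    """Fetch the quality record as parsed JSON (schema undocumented here)."""
    req = urllib.request.Request(quality_url(ecosystem, owner, repo))
    if api_key:
        # Hypothetical header name for keyed access -- check the API docs
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("transformers", "complex-reasoning", "RPG")` reproduces the curl request shown above.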
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.