complex-reasoning/RPG

[ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508)

Quality score: 39 / 100 (Emerging)

This project provides tools for researchers and AI developers to refine large language models (LLMs) for complex reasoning tasks, specifically in mathematics. It takes structured math problem datasets as input and outputs a fine-tuned LLM capable of better step-by-step problem-solving. This is designed for those working on advancing LLM capabilities for accurate, explainable reasoning.

Use this if you are an AI researcher or developer aiming to improve an LLM's ability to solve intricate mathematical problems and require a systematic framework for applying KL-regularized policy gradient methods.

Not ideal if you are a non-technical end-user looking for an out-of-the-box math problem solver or if you do not have significant computational resources and expertise in reinforcement learning for LLMs.
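For orientation, the "KL-regularized policy gradient" in the project name refers to objectives of the following standard form; this is a generic sketch, not the paper's exact formulation (its precise regularizer and gradient estimator may differ):

\max_\theta \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\!\big[ r(x, y) \big] \;-\; \beta \, \mathrm{KL}\!\big( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)

where r is a task reward (e.g. answer correctness on a math problem), \pi_{\mathrm{ref}} is the reference model before fine-tuning, and \beta controls how strongly the policy is kept close to the reference.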

Tags: LLM fine-tuning, mathematical reasoning, AI research, reinforcement learning, natural language processing
No package published, no dependents.

Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 15 / 25
Community: 6 / 25


Stars: 65
Forks: 3
Language: Python
License: MIT
Last pushed: Feb 19, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/complex-reasoning/RPG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.