sparkle-reasoning/sparkle

[NeurIPS'25] Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning

Quality score: 27 / 100 (Experimental)

This framework helps AI researchers and practitioners improve how large language models (LLMs) solve complex mathematical problems. By training LLMs with reinforcement learning on curated hard math problems augmented with partial solutions, it teaches them to reason more effectively. The result is a model that is better at understanding a problem, planning a solution, and executing the individual steps, making it useful for those developing or deploying AI for quantitative tasks.

Use this if you are a machine learning researcher or engineer focused on advancing LLM capabilities for mathematical problem-solving through reinforcement learning.

Not ideal if you are a general user looking for a ready-to-use LLM for basic math, or if you are unfamiliar with training and fine-tuning large language models.

mathematical-reasoning large-language-models reinforcement-learning AI-research model-training
No package · No dependents

Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 15 / 25
Community: 0 / 25


Stars: 16
Forks:
Language: Python
License: MIT
Last pushed: Dec 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sparkle-reasoning/sparkle"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
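The same endpoint can be queried programmatically. A minimal sketch using only the Python standard library; the `transformers` path segment is copied from the curl example above, and the JSON response schema is not documented here, so inspect the payload before relying on any field names:

```python
import json
import urllib.parse
import urllib.request

# Base path taken verbatim from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def build_quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repo given as owner/repo."""
    # quote() guards against unexpected characters in repo names.
    return f"{API_BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (schema is an assumption)."""
    with urllib.request.urlopen(build_quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_quality_url("sparkle-reasoning", "sparkle"))
```

No API key is required at the free tier, so the request carries no auth header; rate limiting (100 requests/day) is enforced server-side.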