HKUDS/LightReasoner
"LightReasoner: Can Small Language Models Teach Large Language Models Reasoning?"
LightReasoner helps AI researchers and developers improve the reasoning abilities of large language models (LLMs) more efficiently. Given an existing large model, a smaller model, and a dataset of problems, it produces a fine-tuned large model with stronger reasoning accuracy while using far less fine-tuning time and compute than conventional approaches. It is aimed at anyone fine-tuning or developing LLMs, especially in academic or research settings.
Use this if you want to enhance your large language model's reasoning capabilities, particularly for tasks like mathematical problem-solving, while drastically reducing the time and computational resources typically required for fine-tuning.
Not ideal if you are looking for a pre-trained, off-the-shelf model without engaging in any custom fine-tuning or model development.
Stars: 594
Forks: 31
Language: Python
License: MIT
Category:
Last pushed: Nov 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HKUDS/LightReasoner"
The API is open to everyone: 100 requests/day without a key, or 1,000/day with a free key.
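For programmatic access, the curl call above can be sketched in Python. This is a minimal example assuming the endpoint returns JSON; the response schema and the meaning of the path segments (`quality`, `transformers`, owner, repo) are inferred from the example URL and are not documented here.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the repo-quality endpoint URL from its path segments."""
    return f"{BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch one repo's record; assumes a JSON body (schema not documented here)."""
    with urllib.request.urlopen(quality_url(collection, owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

Calling `fetch_quality("transformers", "HKUDS", "LightReasoner")` hits the same URL as the curl example; unauthenticated requests count against the 100/day limit.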
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models