Eclipsess/Awesome-Efficient-Reasoning-LLMs
[TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
This project is a curated survey that helps machine learning researchers and practitioners understand how to make Large Language Models (LLMs) reason more efficiently. It compiles and organizes the latest research into a clear overview of techniques for reducing the computational cost and time LLMs spend generating complex thought processes, giving readers a structured map of the field so they can identify research relevant to their work on LLM optimization.
Use this if you are an ML researcher or practitioner looking for a systematic overview of techniques to improve the efficiency of reasoning in Large Language Models.
Not ideal if you are an end-user seeking a ready-to-use LLM application or a developer looking for specific code implementations rather than research insights.
Stars: 752
Forks: 34
Language: —
License: —
Category: —
Last pushed: Feb 28, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Eclipsess/Awesome-Efficient-Reasoning-LLMs"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
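The curl command above can also be called from a script. Below is a minimal Python sketch using only the standard library; note that the response schema and any field names are not documented on this page, so the decoding step is an assumption and only the URL construction is taken from the listing:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and JSON-decode the payload.

    The payload structure is not documented here; this assumes the
    endpoint returns a JSON object, as REST APIs typically do.
    """
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("Eclipsess", "Awesome-Efficient-Reasoning-LLMs"))
```

On the free tier described above, staying under 100 requests per day (or registering for a key) would be the caller's responsibility.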
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models