Eclipsess/Awesome-Efficient-Reasoning-LLMs

[TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models

Quality score: 41 / 100 (Emerging)

This project is a curated survey that helps machine learning researchers and practitioners understand how to make Large Language Models (LLMs) reason more efficiently. It compiles and organizes recent research into a clear overview of techniques for reducing the computational cost and time LLMs spend generating complex thought processes, so readers can quickly identify work relevant to their own LLM optimization efforts.


Use this if you are an ML researcher or practitioner looking for a systematic overview of techniques to improve the efficiency of reasoning in Large Language Models.

Not ideal if you are an end-user seeking a ready-to-use LLM application or a developer looking for specific code implementations rather than research insights.

Topics: Large Language Models, AI Research, Machine Learning, Efficiency, Model Optimization, Computational Linguistics
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 13 / 25

How are scores calculated?
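The page does not show the scoring formula, but the four 25-point subscores above add up exactly to the overall score. A minimal sketch, assuming the overall score is the simple sum of the subscores (an assumption, not documented by the site):

```python
# Assumption: the overall score is the sum of the four 25-point subscores.
subscores = {
    "Maintenance": 10,
    "Adoption": 10,
    "Maturity": 8,
    "Community": 13,
}

total = sum(subscores.values())
print(total)  # 41, matching the 41 / 100 overall score shown above
```

The match with the displayed 41 / 100 is consistent with plain addition, though the site may apply weighting in other cases.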

Stars: 752
Forks: 34
Language: not specified
License: None
Last pushed: Feb 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Eclipsess/Awesome-Efficient-Reasoning-LLMs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
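The same request can be made from Python. A minimal sketch: `quality_url` mirrors the path in the curl example above (including its `transformers` segment), and `fetch_quality` is a hypothetical helper that assumes the endpoint returns JSON; the actual response schema is not documented here.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, ecosystem: str = "transformers") -> str:
    # Build the endpoint URL; path layout copied from the curl example above.
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # No API key needed for up to 100 requests/day; a free key raises
    # the limit to 1,000/day. JSON response shape is an assumption.
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("Eclipsess", "Awesome-Efficient-Reasoning-LLMs"))
```

Calling `fetch_quality("Eclipsess", "Awesome-Efficient-Reasoning-LLMs")` performs the live request shown in the curl example.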