Awesome-LLM-Reasoning and Awesome-LLM-reasoning-papers
These repositories complement each other: atfortes/Awesome-LLM-Reasoning curates practical reasoning techniques, tools, and frameworks, while Oznake/Awesome-LLM-reasoning-papers compiles the underlying academic papers and benchmarks that inform those implementations. Researchers and practitioners can use both together to understand LLM reasoning from theory to application.
About Awesome-LLM-Reasoning
atfortes/Awesome-LLM-Reasoning
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
This collection helps AI researchers and practitioners explore how to improve the reasoning abilities of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). It compiles academic papers and resources on techniques such as Chain-of-Thought prompting, analyses of reasoning performance, and methods for scaling LLM reasoning. Researchers, data scientists, and AI engineers working on advanced LLM applications can use it to understand the current state of the art and the open challenges in making LLMs "think" more effectively.
About Awesome-LLM-reasoning-papers
Oznake/Awesome-LLM-reasoning-papers
This repository offers a well-organized collection of resources focused on reasoning in Large Language Models (LLMs). Explore foundational papers, evaluation benchmarks, and practical tools to enhance your understanding of LLM reasoning. 🐙🌐