YangLing0818/SuperCorrect-llm
[ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction
This project helps developers improve the reasoning and self-correction abilities of smaller Large Language Models (LLMs). It takes a standard LLM as input and, through a specialized fine-tuning process, outputs a more accurate and robust LLM capable of complex problem-solving. This is primarily for AI/ML developers and researchers who are building or fine-tuning LLMs for advanced reasoning tasks.
No commits in the last 6 months.
Use this if you are a developer or researcher looking to significantly enhance the mathematical and logical reasoning capabilities of your smaller LLMs, making them more competitive with larger models.
Not ideal if you are a non-technical end-user looking for an off-the-shelf AI model for general use, as this project requires substantial ML expertise to set up and run.
Stars
87
Forks
7
Language
Python
License
—
Category
Last pushed
Mar 23, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/YangLing0818/SuperCorrect-llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
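For programmatic access, the curl command above can be wrapped in a small Python client. This is a minimal sketch: the endpoint URL is taken verbatim from the example, but the response schema is not documented here, so the code only fetches and parses the JSON without assuming specific field names.

```python
import json
import urllib.request

# Endpoint from the curl example above; no API key is required
# for the free tier (100 requests/day).
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/YangLing0818/SuperCorrect-llm"
)


def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the repo quality record and parse it as JSON.

    The payload structure is undocumented here, so callers should
    inspect the returned dict rather than rely on assumed keys.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage: call `fetch_quality()` and pretty-print the result with `json.dumps(data, indent=2)` to discover the available fields before building anything on top of them.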
Higher-rated alternatives
ExtensityAI/symbolicai
A neurosymbolic perspective on LLMs
TIGER-AI-Lab/MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding...
deep-symbolic-mathematics/LLM-SR
[ICLR 2025 Oral] This is the official repo for the paper "LLM-SR" on Scientific Equation...
microsoft/interwhen
A framework for verifiable reasoning with language models.
zhudotexe/fanoutqa
Companion code for FanOutQA: Multi-Hop, Multi-Document Question Answering for Large Language...