InternLM/SIM-CoT
[ICLR 2026] An official implementation of "SIM-CoT: Supervised Implicit Chain-of-Thought"
SIM-CoT is a training framework that improves the reasoning ability of large language models (LLMs) while keeping inference efficient. It takes an existing LLM, adds supervision on its implicit (internal) reasoning steps during training, and produces a more accurate and stable model. It is intended for researchers and practitioners who develop or fine-tune LLMs for complex problem-solving tasks.
Use this if you are developing or fine-tuning large language models and need to improve their accuracy and stability for complex reasoning tasks without increasing their inference cost.
Not ideal if you are looking for a pre-trained, ready-to-use LLM for end-user applications, as this is a training framework for LLM developers.
Stars: 185
Forks: 10
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/InternLM/SIM-CoT"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
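The same endpoint can also be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above and a JSON response body (the response schema and the mechanism for supplying an API key are not documented here, so `fetch_quality` is a hypothetical helper):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for an owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality record as parsed JSON.

    Assumption: the endpoint returns a JSON object; its fields are
    not specified on this page.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)


print(quality_url("InternLM", "SIM-CoT"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/InternLM/SIM-CoT
```

For keyed access (the higher rate limit), you would attach the key to the request; since the header or query-parameter name is not documented here, check the service's own docs before relying on this sketch.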
Related models
zhenyi4/codi
Official repository for "CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation"
xf-zhao/LoT
Official implementation of LoT paper: "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large...
nicolay-r/Reasoning-for-Sentiment-Analysis-Framework
The official code for CoT / ZSL reasoning framework 🧠, utilized in paper: "Large Language Models...
FranxYao/FlanT5-CoT-Specialization
Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning.
KomeijiForce/CoTAM
Official Implementation of the ACL2024 Findings paper "Controllable Data Augmentation for...