InternLM/SIM-CoT

[ICLR 2026] An official implementation of "SIM-CoT: Supervised Implicit Chain-of-Thought"

Quality score: 45/100 (Emerging)

This project offers a training framework called SIM-CoT that improves the reasoning abilities of large language models (LLMs) while keeping them efficient. It takes an existing LLM, trains it with additional supervision on its internal reasoning steps, and produces a more accurate and stable model. It is aimed at researchers and AI practitioners developing or fine-tuning LLMs for complex problem-solving tasks.
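To make "supervision on its internal reasoning steps" concrete, here is a toy sketch of the general idea: an auxiliary head decodes each implicit (latent) reasoning state back to an explicit step token, and the resulting cross-entropy is added to the usual final-answer objective. Everything here is illustrative, not the repository's actual code: the function names, the placeholder answer loss, and the weighting factor `lam` are all assumptions.

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def step_supervision_loss(latent_states, step_targets, W_aux):
    """Cross-entropy between decoded latent states and explicit step tokens."""
    total = 0.0
    for h, t in zip(latent_states, step_targets):
        # Project the latent state through the (hypothetical) auxiliary decoder head.
        logits = [sum(hi * wij for hi, wij in zip(h, col)) for col in zip(*W_aux)]
        probs = softmax(logits)
        total += -math.log(probs[t] + 1e-12)
    return total / len(step_targets)

# Toy setup: 4 implicit reasoning states (dim 8), step-token vocabulary of 16.
hidden_dim, vocab_size, n_steps = 8, 16, 4
latent = [[random.gauss(0, 1) for _ in range(hidden_dim)] for _ in range(n_steps)]
targets = [random.randrange(vocab_size) for _ in range(n_steps)]
W_aux = [[random.gauss(0, 1) for _ in range(vocab_size)] for _ in range(hidden_dim)]

aux_loss = step_supervision_loss(latent, targets, W_aux)
answer_loss = 1.25   # placeholder scalar standing in for the final-answer objective
lam = 0.5            # hypothetical weight on the step-level supervision term
total_loss = answer_loss + lam * aux_loss
print(f"aux={aux_loss:.3f} total={total_loss:.3f}")
```

Because the auxiliary head is only used during training, this style of supervision adds no cost at inference time, which matches the project's stated goal of keeping the model efficient.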


Use this if you are developing or fine-tuning large language models and need to improve their accuracy and stability for complex reasoning tasks without increasing their inference cost.

Not ideal if you are looking for a pre-trained, ready-to-use LLM for end-user applications, as this is a training framework for LLM developers.

Tags: LLM development · AI research · model training · natural language processing · reasoning systems
No package · No dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 15/25
Community: 10/25


Stars: 185
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Feb 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/InternLM/SIM-CoT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
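For scripted use, the curl call above can be reproduced with Python's standard library. This is a minimal sketch: the base URL and the `transformers/InternLM/SIM-CoT` path segments come from the example above, but the function names are hypothetical and the JSON response schema is not documented here, so `fetch_quality` just returns the raw body.

```python
import urllib.parse
import urllib.request

# Endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score API URL for a given repository."""
    path = "/".join(urllib.parse.quote(p, safe="") for p in (ecosystem, owner, repo))
    return f"{BASE}/{path}"

def fetch_quality(ecosystem: str, owner: str, repo: str, timeout: float = 10.0) -> bytes:
    """Fetch the raw response body (live network call; schema undocumented here)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo), timeout=timeout) as resp:
        return resp.read()

print(quality_url("transformers", "InternLM", "SIM-CoT"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/InternLM/SIM-CoT
```

Note the 100 requests/day limit when calling without a key; batch or cache responses accordingly.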