zhenyi4/codi
Official repository for "CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation"
This project distills the explicit step-by-step reasoning (Chain-of-Thought) of large language models (LLMs) into a compressed, continuous representation via self-distillation. The resulting model achieves similar reasoning capability while generating far fewer tokens, making inference faster and less resource-intensive. It is aimed at researchers and engineers working with LLMs who need to improve efficiency.
Use this if you are a machine learning researcher or engineer aiming to compress the reasoning capabilities of large language models for efficiency without significant performance loss.
Not ideal if you are an end-user simply looking to apply an existing language model for text generation or analysis without modifying its core architecture or training process.
Stars: 73
Forks: 13
Language: Python
License: —
Category: —
Last pushed: Dec 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zhenyi4/codi"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
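For scripted access, the endpoint above can be built programmatically. The sketch below is a minimal, hypothetical helper: only the URL pattern is taken from the curl example; the path layout (category/owner/repo) and the helper's name are assumptions, and the response schema is not documented here.

```python
# Hypothetical helper for the pt-edge quality API.
# Only the URL pattern is grounded in the curl example above;
# the category/owner/repo path layout is an assumption.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository (assumed path layout)."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("transformers", "zhenyi4", "codi"))
```

Fetching the URL (e.g. with `urllib.request.urlopen` or `requests.get`) is subject to the rate limits stated above: 100 requests/day without a key, 1,000/day with a free key.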
Higher-rated alternatives
InternLM/SIM-CoT
[ICLR 2026] An official implementation of "SIM-CoT: Supervised Implicit Chain-of-Thought"
xf-zhao/LoT
Official implementation of LoT paper: "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large...
nicolay-r/Reasoning-for-Sentiment-Analysis-Framework
The official code for CoT / ZSL reasoning framework 🧠, utilized in paper: "Large Language Models...
FranxYao/FlanT5-CoT-Specialization
Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning.
KomeijiForce/CoTAM
Official Implementation of the ACL2024 Findings paper "Controllable Data Augmentation for...