zjunlp/LightThinker

[EMNLP 2025] LightThinker: Thinking Step-by-Step Compression

Overall score: 33 / 100 (Emerging)

This project makes large language models (LLMs) more efficient at complex problems that require many steps of reasoning. It compresses the verbose, step-by-step thoughts an LLM generates into shorter, more compact representations, reducing the compute and memory needed during inference.

134 stars. No commits in the last 6 months.

Use this if you are an AI researcher or ML engineer developing or deploying LLMs for tasks that involve extensive, multi-step reasoning and you need to optimize their efficiency and resource usage.

Not ideal if you are a business user or an application developer who needs a ready-to-use API or a simple tool for immediate LLM inference without delving into model training or optimization.

Large Language Models AI Efficiency Natural Language Processing Model Optimization Cognitive AI
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 134
Forks: 5
Language: Python
License: MIT
Last pushed: Apr 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/LightThinker"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.