xuyige/SoftCoT

ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs; preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of-Thought Reasoning.

Score: 44 / 100 (Emerging)

This project helps AI researchers and practitioners improve the reasoning capability and efficiency of large language models (LLMs). Given an input prompt and an LLM, it produces more accurate and robust answers by generating "soft thoughts" that guide the model's reasoning process. It is aimed at professionals who develop or deploy LLM-powered applications and need stronger performance on complex tasks.

No commits in the last 6 months.

Use this if you are working with large language models and want to enhance their ability to reason through problems, especially in areas like mathematical problem-solving, without experiencing 'catastrophic forgetting' of prior knowledge.

Not ideal if you are looking for a simple, out-of-the-box solution for general LLM usage without delving into advanced model training and evaluation techniques.

Large Language Models · AI Reasoning · Model Efficiency · Natural Language Processing · Deep Learning · Research
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 81
Forks: 14
Language: Python
License:
Last pushed: May 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xuyige/SoftCoT"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
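The curl command above can also be issued from Python. The sketch below is a minimal, hedged example: only the URL pattern comes from the page; the helper names and the assumption that the endpoint returns JSON are illustrative, not documented behavior.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repo, following the pattern in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report (requires network access; assumes a JSON body)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Reproduces the URL from the curl example for this repository.
    print(quality_url("llm-tools", "xuyige", "SoftCoT"))
```

The response schema is not documented on this page, so the sketch returns the decoded JSON as-is rather than assuming particular fields.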