YangLing0818/buffer-of-thought-llm

[NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

Score: 46/100 (Emerging)

This project helps anyone who uses large language models (LLMs) to solve complex reasoning problems. Given a problem, it retrieves a relevant, reusable "thought-template" from a buffer of stored templates, instantiates it with the specifics of the task, and uses the result to guide the model toward a more accurate and efficient reasoning process. The typical end user is a researcher, data scientist, or AI practitioner applying LLMs to challenging tasks such as advanced math, logical puzzles, or strategic games. A minimal sketch of this retrieve-and-instantiate pattern follows.
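The sketch below illustrates the general pattern described above: keep a buffer of thought-templates, pick one that matches the problem, fill it in, and pass the resulting prompt to a model. All names here (META_BUFFER, retrieve_template, llm_call) are hypothetical illustrations and do not mirror the repository's actual API.

# Toy buffer of reusable thought-templates, keyed by problem class.
META_BUFFER = {
    "arithmetic": (
        "Break the expression into sub-expressions, evaluate each, "
        "then combine the results: {problem}"
    ),
    "game_of_24": (
        "List the four numbers, enumerate operator and grouping "
        "combinations, and check which one evaluates to 24: {problem}"
    ),
}

def retrieve_template(problem: str) -> str:
    """Pick the stored template that best matches the problem (toy keyword match)."""
    if "24" in problem:
        return META_BUFFER["game_of_24"]
    return META_BUFFER["arithmetic"]

def instantiate(template: str, problem: str) -> str:
    """Fill the template with the concrete problem to build a reasoning prompt."""
    return template.format(problem=problem)

def llm_call(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an OpenAI or local-model client)."""
    return f"[model reasoning for: {prompt}]"

if __name__ == "__main__":
    problem = "Use 4, 7, 8, 8 to make 24."
    prompt = instantiate(retrieve_template(problem), problem)
    print(llm_call(prompt))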

675 stars. No commits in the last 6 months.

Use this if you want higher accuracy and efficiency from your LLM on complex reasoning tasks than standard prompting methods (such as plain chain-of-thought) provide.

Not ideal if you only use LLMs for simple tasks such as basic text generation or summarization, where complex reasoning is not required.

Tags: LLM-reasoning · problem-solving · AI-research · cognitive-AI · advanced-analytics
Flags: Stale (6 months) · No package published · No known dependents
Maintenance: 2/25
Adoption: 10/25
Maturity: 16/25
Community: 18/25
(The four components sum to the overall score of 46/100.)


Stars: 675
Forks: 64
Language: Python
License: MIT
Last pushed: Jun 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/YangLing0818/buffer-of-thought-llm"

Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
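If you prefer to call the endpoint from Python instead of curl, a minimal standard-library sketch is shown below. It assumes the endpoint returns JSON; the response fields are not documented here, so the code simply prints whatever comes back rather than assuming field names.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/rag/"
       "YangLing0818/buffer-of-thought-llm")

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)  # assumes a JSON response body

# Pretty-print the top-level fields the API returns.
print(json.dumps(data, indent=2))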