YangLing0818/buffer-of-thought-llm
[NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
This project improves how large language models (LLMs) solve complex reasoning problems. Given a problem, it retrieves a stored, adaptable "thought-template" and instantiates it to generate a more accurate and efficient reasoning process than standard prompting. The typical user is a researcher, data scientist, or AI practitioner working with LLMs on challenging tasks such as advanced math, logical puzzles, or strategic games.
675 stars. No commits in the last 6 months.
Use this if you need your LLM to perform significantly better on complex reasoning tasks, achieving higher accuracy and solving problems more efficiently than standard prompting methods.
Not ideal if you are only using LLMs for simple tasks like basic text generation or summarization where complex reasoning is not required.
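The core idea described above — keep a buffer of reusable thought-templates, retrieve the best match for an incoming problem, and instantiate it as the LLM prompt — can be sketched as follows. This is a minimal illustration, not the repository's actual API: the class names, the keyword-overlap retrieval, and the stub LLM are all assumptions for demonstration.

```python
# Hypothetical sketch of the Buffer of Thoughts loop.
# Class names and retrieval logic are illustrative, not the repo's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ThoughtTemplate:
    name: str
    keywords: set          # words used to match incoming problems
    template: str          # reasoning scaffold with a {problem} slot

class MetaBuffer:
    """Stores reusable thought-templates and retrieves the best match."""
    def __init__(self):
        self.templates = []

    def add(self, t: ThoughtTemplate) -> None:
        self.templates.append(t)

    def retrieve(self, problem: str) -> ThoughtTemplate:
        words = set(problem.lower().split())
        # Pick the template sharing the most keywords with the problem.
        return max(self.templates, key=lambda t: len(t.keywords & words))

def solve(problem: str, buffer: MetaBuffer, llm: Callable[[str], str]) -> str:
    # Retrieve a template, fill in the problem, and hand the prompt to the LLM.
    template = buffer.retrieve(problem)
    prompt = template.template.format(problem=problem)
    return llm(prompt)

# Usage with a stub LLM standing in for a real model call:
buffer = MetaBuffer()
buffer.add(ThoughtTemplate(
    name="arithmetic",
    keywords={"sum", "add", "multiply", "product"},
    template="Solve step by step using arithmetic rules:\n{problem}",
))
buffer.add(ThoughtTemplate(
    name="logic",
    keywords={"if", "then", "implies", "true", "false"},
    template="Work through the logical implications:\n{problem}",
))

stub_llm = lambda prompt: f"[LLM answer for prompt of {len(prompt)} chars]"
print(solve("Compute the product of 6 and 7", buffer, stub_llm))
```

The real project's template retrieval is more sophisticated (the paper describes embedding-based matching and a template-distillation step), but the retrieve-then-instantiate control flow is the distinguishing feature versus plain prompting.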
Stars
675
Forks
64
Language
Python
License
MIT
Category
Last pushed
Jun 28, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/YangLing0818/buffer-of-thought-llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
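For scripted use, the same endpoint can be called from Python with the standard library. Only the URL comes from the listing above; the JSON response schema is not documented here, so the fetch helper is a hedged sketch.

```python
# Fetch the repo-quality record from the API shown above.
# The response schema is an assumption; only the base URL is from the listing.
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo endpoint URL, escaping any unusual characters.
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Network call; subject to the 100 requests/day unauthenticated limit.
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("YangLing0818", "buffer-of-thought-llm"))
```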
Higher-rated alternatives
neo4j/neo4j-graphrag-python
Neo4j GraphRAG for Python
microsoft/graphrag
A modular graph-based Retrieval-Augmented Generation (RAG) system
Hawksight-AI/semantica
Semantica 🧠— A framework for building semantic layers, context graphs, and decision...
FalkorDB/GraphRAG-SDK
Build fast and accurate GenAI apps with GraphRAG SDK at scale.
getzep/graphiti
Build Real-Time Knowledge Graphs for AI Agents