Chihaya-Yuka/Multiplex-CoT
[arXiv 2501.13117] Multiplex CoT prompts an LLM to review its own chain-of-thought before answering.
This repository helps AI developers and researchers make large language models (LLMs) reason more carefully when solving problems. You give the LLM a task; it generates a step-by-step thought process, then reviews its own reasoning to produce a more accurate and logical final answer. It is aimed at practitioners who want to improve LLM reasoning without extensive retraining.
No commits in the last 6 months.
Use this if you are working with large language models and need them to produce more robust, reflective, and accurate reasoning for complex tasks.
Not ideal if you are an end-user simply looking for a ready-to-use application, as this is a method for improving the underlying AI model's thought process.
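The generate-then-review flow described above can be sketched as a two-pass prompting loop. The prompt wording and the `ask` callback below are illustrative assumptions, not the repository's actual implementation:

```python
# Sketch of the Multiplex CoT idea: two-pass prompting, where the model
# first reasons step by step and then reviews its own reasoning.
# Prompt texts here are assumptions for illustration only.

def build_reasoning_prompt(task: str) -> str:
    """First pass: ask the model to think step by step."""
    return f"Task: {task}\nThink through this step by step, numbering each step."

def build_review_prompt(task: str, reasoning: str) -> str:
    """Second pass: ask the model to critique and correct its own draft."""
    return (
        f"Task: {task}\n"
        f"Draft reasoning:\n{reasoning}\n"
        "Review the draft above for logical errors or gaps, "
        "then give a corrected final answer."
    )

def multiplex_cot(task: str, ask) -> str:
    """Run both passes with any `ask(prompt) -> str` completion function,
    e.g. a wrapper around your LLM API of choice."""
    reasoning = ask(build_reasoning_prompt(task))
    return ask(build_review_prompt(task, reasoning))
```

Because `ask` is just a callable, the same two-pass structure works with any LLM backend; the second call sees the first call's output embedded in its prompt, which is what lets the model catch its own mistakes.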
Stars: 19
Forks: 1
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Feb 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Chihaya-Yuka/Multiplex-CoT"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
kyegomez/tree-of-thoughts
Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large...
spcl/graph-of-thoughts
Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
kyegomez/the-compiler
Seed, Code, Harvest: Grow Your Own App with Tree of Thoughts!
atfortes/Awesome-LLM-Reasoning
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
habedi/cogitator
A Python toolkit for chain-of-thought prompting 🐍