zhiyuanhubj/UoT
[NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models
This project helps professionals such as medical diagnosticians, technical support staff, or game players solve complex problems by asking smarter questions. Given an initial problem description or set of symptoms, it guides a large language model to ask the most effective follow-up questions to pinpoint the correct diagnosis, solution, or answer. The result is a more efficient and accurate problem-solving process that needs fewer questions to reach a correct conclusion.
106 stars. No commits in the last 6 months.
Use this if you need a large language model to seek information intelligently, asking probing questions to solve open-ended problems rather than making assumptions.
Not ideal if your problem space is very clearly defined and requires a fixed, exhaustive set of questions, or if you prefer a model to directly provide an answer without interactive questioning.
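The core idea behind uncertainty-aware question selection can be illustrated with a minimal sketch: among candidate yes/no questions, prefer the one whose answer is expected to reduce uncertainty (entropy) over the remaining possibilities the most. This is a simplified, self-contained illustration of the general information-gain principle under a uniform prior; the repository's actual method additionally simulates answers with the LLM and propagates rewards over a tree of possible futures, so treat the names and structure here as assumptions, not the project's API.

```python
import math

def expected_entropy_after(candidates, predicate):
    """Expected remaining entropy (bits) after asking a yes/no question.

    `predicate(c)` is the hypothetical answer for candidate `c`.
    Assumes a uniform prior over candidates, so the entropy of a
    group of size k is log2(k), weighted by the answer probability.
    """
    n = len(candidates)
    yes = [c for c in candidates if predicate(c)]
    no = [c for c in candidates if not predicate(c)]
    h = 0.0
    for group in (yes, no):
        if group:
            h += (len(group) / n) * math.log2(len(group))
    return h

def best_question(candidates, questions):
    """Pick the (text, predicate) question minimizing expected entropy."""
    return min(questions, key=lambda q: expected_entropy_after(candidates, q[1]))

# Toy 20-questions setup (hypothetical data, for illustration only).
candidates = ["cat", "dog", "eagle", "shark"]
questions = [
    ("Does it fly?", lambda c: c == "eagle"),        # splits 1 vs 3
    ("Is it a mammal?", lambda c: c in {"cat", "dog"}),  # splits 2 vs 2
]
print(best_question(candidates, questions)[0])  # the even split wins
```

An even 2/2 split leaves 1.0 expected bits, while the 1/3 split leaves about 1.19, so the entropy criterion prefers "Is it a mammal?" — the same intuition that lets UoT-style planning reach an answer in fewer questions.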
Stars
106
Forks
8
Language
Python
License
—
Category
Last pushed
Aug 05, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zhiyuanhubj/UoT"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
open-thought/reasoning-gym
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Hmbown/Hegelion
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
LLM360/Reasoning360
A repo for open research on building large reasoning models
bowang-lab/BioReason
BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model | NeurIPS '25
TsinghuaC3I/Awesome-RL-for-LRMs
A Survey of Reinforcement Learning for Large Reasoning Models