MraDonkey/DMAD

[ICLR 2025] Breaking Mental Set to Improve Reasoning through Diverse Multi-Agent Debate

Score: 32 / 100 (Emerging)

This project helps users of large language models (LLMs) get more accurate answers, especially on complex problems where a model can get stuck in a repetitive or narrow way of thinking. Given a problem as input, it runs a "debate" among multiple AI agents, each employing a distinct reasoning style, to produce a refined, higher-quality solution. It is aimed at researchers and practitioners who build or deploy LLM-powered applications and want to improve their reliability and performance.

No commits in the last 6 months.

Use this if your LLM-powered applications are consistently making reasoning mistakes or failing to find optimal solutions because the model gets stuck in a "mental set."

Not ideal if you need a quick, single-pass answer for straightforward problems where current LLM performance is already satisfactory, or if you don't have the technical resources to set up a multi-agent system.

LLM-reasoning AI-problem-solving cognitive-bias-LLM AI-reliability multi-agent-AI
Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 19
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Apr 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/MraDonkey/DMAD"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
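The same endpoint can be called from Python with the standard library. This is a minimal sketch: the URL pattern comes from the curl example above, but the response schema and the helper names (`quality_url`, `fetch_quality`) are illustrative assumptions, not documented API details.

```python
import json
import urllib.request

# Endpoint pattern taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score URL for a given GitHub repo."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (free tier, no key).

    The exact fields in the response are an assumption; inspect the
    returned dict before relying on specific keys.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("MraDonkey", "DMAD"))
```

For authenticated use at the higher rate limit, you would attach your free key to the request; how the key is passed (header vs. query parameter) is not specified on this page.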