Skytliang/Multi-Agents-Debate
MAD: The first work to explore Multi-Agent Debate with Large Language Models :D
This project helps people get more accurate answers from Large Language Models (LLMs) by simulating a debate. Instead of a single LLM trying to solve a problem on its own, two LLMs (a 'devil' and an 'angel') argue and correct each other's biases, while a third LLM acts as a judge. This process refines initial outputs, producing more robust and less biased results, particularly for complex reasoning or translation tasks. It is aimed at anyone using LLMs for critical tasks who needs to minimize the errors a single model makes on its own.
Use this if you need to improve the reliability and accuracy of answers generated by Large Language Models, especially for tasks requiring nuanced reasoning or translation where a single model might produce biased or incomplete responses.
Not ideal if you need instant, simple answers where the overhead of a multi-agent debate isn't justified, or if you are working with extremely short, factual queries.
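The devil/angel/judge setup described above can be sketched as a minimal debate loop. This is a sketch under assumptions, not the repository's actual implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, and the prompts, role names, and round count are illustrative only.

```python
# Minimal sketch of a multi-agent debate loop (hypothetical, not MAD's code):
# two debaters ("angel" and "devil") exchange arguments, a "judge" decides.

def call_llm(role: str, prompt: str) -> str:
    # Stub: replace with a real chat-completion call for your LLM provider.
    return f"[{role}] response to: {prompt[:40]}"

def debate(question: str, rounds: int = 2) -> str:
    history = []
    answer = call_llm("angel", question)  # affirmative side opens
    history.append(("angel", answer))
    for _ in range(rounds):
        # Devil must disagree, surfacing flaws in the current answer.
        rebuttal = call_llm(
            "devil",
            f"{question}\nOpponent said: {answer}\nDisagree and answer yourself.",
        )
        history.append(("devil", rebuttal))
        # Angel defends or revises in light of the rebuttal.
        answer = call_llm(
            "angel",
            f"{question}\nOpponent said: {rebuttal}\nDefend or revise your answer.",
        )
        history.append(("angel", answer))
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    # Judge reads the full transcript and produces the final answer.
    return call_llm("judge", f"{question}\nTranscript:\n{transcript}\nFinal answer?")
```

With real LLM calls substituted for the stub, the judge's output is the refined answer returned to the user.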
Stars: 532
Forks: 57
Language: Python
License: GPL-3.0
Category:
Last pushed: Dec 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Skytliang/Multi-Agents-Debate"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
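The endpoint path shown in the curl command above can also be built programmatically. This is a small sketch only: the URL pattern is taken from the example above, but the response schema and any auth header name for keyed access are not documented here, so they are left out.

```python
# Build the quality-API URL for a GitHub repository (sketch based on the
# curl example; response format and key-based auth are not specified here).

from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the pt-edge quality endpoint for owner/repo."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

# Usage (anonymous access, limited to 100 requests/day):
#   import urllib.request
#   with urllib.request.urlopen(quality_url("Skytliang", "Multi-Agents-Debate")) as r:
#       payload = r.read()
```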
Higher-rated alternatives
- betagouv/ComparIA: Open source LLM arena created by the French Government
- liuxiaotong/ai-dataset-radar: Multi-source async competitive intelligence engine for AI training data ecosystems with...
- Arnoldlarry15/ARES-Dashboard: AI Red Team Operations Console
- llm-ring/lmring: Open-source, self-hostable LLM arena with model compare, voting, and leaderboards
- YerbaPage/SWE-Debate: Competitive Multi-Agent Debate for Software Issue Resolution