YerbaPage/SWE-Debate
SWE-Debate: Competitive Multi-Agent Debate for Software Issue Resolution
SWE-Debate helps software developers quickly pinpoint the code locations responsible for a reported bug or issue. It takes an issue description as input and runs an AI-powered debate among virtual agents to identify relevant code entities and dependency paths, then generates a detailed plan for code modifications. Software engineers and QA teams can use it to streamline their debugging and issue-resolution workflows.
Use this if you need an automated, robust way to find the root cause and generate potential fixes for software issues within a large codebase.
Not ideal if you are working with small, simple scripts or require a purely manual, human-driven debugging process.
Stars: 25
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YerbaPage/SWE-Debate"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
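If you query the API programmatically, a small helper can build the endpoint URL. This is a minimal sketch assuming the `/{owner}/{repo}` path pattern shown in the curl example above generalizes to other repositories; the JSON response schema is not documented here, so the fetch step is left generic.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above; the owner/repo path
# pattern is an assumption based on that single example.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def tool_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_tool_data(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one tool (schema not shown here)."""
    with urllib.request.urlopen(tool_url(owner, repo)) as resp:
        return json.load(resp)

print(tool_url("YerbaPage", "SWE-Debate"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/YerbaPage/SWE-Debate
```

With a free key, you would add it to the request headers; the header name is not stated on this page, so check the API's own documentation before relying on one.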
Higher-rated alternatives
betagouv/ComparIA
Open source LLM arena created by the French Government
Skytliang/Multi-Agents-Debate
MAD: The first work to explore Multi-Agent Debate with Large Language Models :D
liuxiaotong/ai-dataset-radar
Multi-source async competitive intelligence engine for AI training data ecosystems with...
Arnoldlarry15/ARES-Dashboard
AI Red Team Operations Console
llm-ring/lmring
Open-source, self-hostable LLM arena with model compare, voting, and leaderboards