Software-Engineering-Arena/SWE-Chatbot-Arena

Compare chatbots pairwise via multi‑round evaluations for SE tasks.

Score: 23 / 100 (Experimental)

This tool helps software engineers evaluate large language models (LLMs) specifically for real-world software engineering tasks. You provide an SE task, optionally with a repository URL, and receive responses from two anonymous LLMs. After comparing their multi-round interactions, you vote for the one that performs better on activities like debugging, code review, or refactoring.

Use this if you need to compare different LLMs to see which one performs best on iterative software engineering workflows and understands repository context.

Not ideal if you are looking to evaluate general-purpose chatbot capabilities unrelated to coding or software development tasks.

software-engineering LLM-evaluation code-review debugging developer-tools
No license · No package · No dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?
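The four component scores above appear to sum to the headline figure: 10 + 5 + 8 + 0 = 23 out of a possible 100.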

Stars: 13
Forks:
Language: Python
License: none
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Software-Engineering-Arena/SWE-Chatbot-Arena"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
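
The same endpoint can also be queried programmatically. The sketch below is a minimal Python example using only the standard library; it assumes the endpoint returns a JSON body and makes no assumptions about the response's field names. The keyed-access mechanism for the higher rate limit is not documented here, so authentication is omitted.

import json
import urllib.request

# Same endpoint as the curl example above; open access, up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "Software-Engineering-Arena/SWE-Chatbot-Arena")

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)  # assumes the endpoint returns JSON

# No field names are assumed here; just pretty-print whatever the API returns.
print(json.dumps(data, indent=2))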