SWE-bench/swe-bench.github.io
Landing page + leaderboard for the SWE-bench benchmark
This is the official website for SWE-bench, a benchmark that measures how well systems can automatically resolve real-world software issues drawn from GitHub. It presents leaderboards ranking automated code-fixing systems by performance. Software engineers, researchers, and AI developers can use the site to compare how effectively different models or approaches resolve GitHub issues, with system rankings and detailed per-system results.
Use this if you are a researcher or developer who wants to evaluate and compare the performance of different automated software engineering systems on a standardized benchmark.
Not ideal if you are looking for a tool to fix your own software bugs or manage GitHub issues directly, as this site is for benchmarking existing systems.
Stars
12
Forks
15
Language
JavaScript
License
—
Category
—
Last pushed
Mar 04, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/SWE-bench/swe-bench.github.io"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
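For programmatic access, here is a minimal sketch in Python that calls the endpoint shown above and pretty-prints the returned JSON. The response schema is not documented on this page, so the sketch makes no assumptions about its fields, and the keyless 100 requests/day limit is assumed to apply.

import json
import requests

# Endpoint from the curl example above; no API key needed for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/agents/SWE-bench/swe-bench.github.io"

response = requests.get(url, timeout=10)
response.raise_for_status()  # surface rate-limit or server errors as exceptions

# Schema is undocumented here, so just pretty-print whatever JSON comes back.
print(json.dumps(response.json(), indent=2))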
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
RouteWorks/RouterArena
RouterArena: An open framework for evaluating LLM routers with standardized datasets, metrics,...