Software-Engineering-Arena/SWE-Model-Arena
Compare tool-calling models pairwise via multi-round evaluations for software-engineering (SE) tasks.
Overall score: 14 / 100
Badges: Experimental · No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 1 / 25
Maturity: 3 / 25
Community: 0 / 25
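The four pillar scores above add up to the overall rating (10 + 1 + 3 + 0 = 14 out of a possible 100), which suggests the composite is a simple unweighted sum of four 25-point pillars. A minimal sketch under that assumption:

```python
# Pillar scores from the card, each graded out of 25.
pillars = {
    "Maintenance": 10,
    "Adoption": 1,
    "Maturity": 3,
    "Community": 0,
}

# Assumption: the overall rating is the plain sum of the four pillars,
# so the maximum possible score is 4 * 25 = 100.
overall = sum(pillars.values())
print(f"{overall} / {4 * 25}")  # 14 / 100
```

This matches the 14 / 100 shown on the card, but the scoring formula itself is not documented here, so treat the unweighted sum as an inference, not a specification.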
Stars: 1
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 24, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/Software-Engineering-Arena/SWE-Model-Arena"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
strands-agents/evals (score 53): A comprehensive evaluation framework for AI agents and LLM applications.
eve-mas/eve-parity (score 37): Equilibrium Verification Environment (EVE) is a formal verification tool for the automated...
usestrix/benchmarks (score 34): Evaluation harness for the Strix agent.
KazKozDev/murmur (score 21): A mix-of-agents orchestration system for distributed LLM processing.
tanvirbhachu/ai-bench (score 20): A CLI benchmark runner for quickly testing AI models.