RouteWorks/RouterArena
RouterArena: An open platform for evaluating LLM routers, with standardized datasets, evaluation metrics, an automated evaluation framework, and a live leaderboard.
This project provides an open platform for evaluating and comparing large language model (LLM) routers. It measures how well a routing system selects the best LLM for a given query, weighing factors such as accuracy and cost. You submit your router's decisions for a set of queries; the platform returns detailed performance metrics and a ranking on a live leaderboard. It is aimed at anyone developing or integrating LLM routing solutions who wants to benchmark their system.
Use this if you are building or using LLM routing systems and need a standardized way to measure their effectiveness, efficiency, and cost-performance trade-offs.
Not ideal if you are looking for a tool to build or deploy an LLM router, as this focuses solely on evaluation and benchmarking.
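To make the accuracy/cost trade-off concrete, here is a minimal, hypothetical scoring sketch in Python. The record format, field names, and the summary printed at the end are illustrative assumptions, not RouterArena's actual submission schema or metrics.

```python
# Hypothetical sketch: scoring a router's decisions on accuracy and cost.
# The record format and field names below are illustrative assumptions,
# not RouterArena's actual submission schema.

decisions = [
    # Each record: which model the router picked for a query, whether that
    # model answered correctly, and what the call cost in dollars.
    {"query_id": "q1", "model": "model-a", "correct": True,  "cost_usd": 0.0021},
    {"query_id": "q2", "model": "model-b", "correct": False, "cost_usd": 0.0004},
    {"query_id": "q3", "model": "model-a", "correct": True,  "cost_usd": 0.0019},
]

accuracy = sum(d["correct"] for d in decisions) / len(decisions)
total_cost = sum(d["cost_usd"] for d in decisions)

# A simple cost-performance summary of the kind a leaderboard might rank on.
print(f"accuracy={accuracy:.2%}  total_cost=${total_cost:.4f}  "
      f"accuracy_per_dollar={accuracy / total_cost:.1f}")
```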
Stars: 71
Forks: 12
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/RouteWorks/RouterArena"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
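The same endpoint can also be called from Python. A minimal sketch using `requests`, assuming the endpoint returns JSON (the response schema is not documented on this page):

```python
import requests

# Fetch the quality data for this repository from the public API.
# No key is needed for up to 100 requests/day. Treating the response
# as JSON is an assumption; the schema is not documented here.
url = "https://pt-edge.onrender.com/api/v1/quality/agents/RouteWorks/RouterArena"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)
```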
Related agents
- StonyBrookNLP/appworld: 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
- qualifire-dev/rogue: AI Agent Evaluator & Red Team Platform
- microsoft/WindowsAgentArena: Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
- future-agi/ai-evaluation: Evaluation Framework for all your AI related Workflows
- dreadnode/AIRTBench-Code: Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models