nottelabs/open-operator-evals
Open-source benchmark for evaluating the performance of web operators/agents
This project evaluates how well different web agents, or 'operators', perform common online tasks such as booking travel or finding information on websites. It takes web agents as input and produces a detailed performance report, including success rates, completion times, and reliability metrics. Anyone developing, deploying, or choosing a web automation agent can use it to compare effectiveness objectively.
No commits in the last 6 months.
Use this if you need to compare the real-world performance of different web automation agents or evaluate how well your own agent navigates and completes tasks on various websites.
Not ideal if you are looking for a tool to build or train web agents, as this project focuses solely on benchmarking existing ones.
Stars: 47
Forks: 7
Language: Python
License: —
Category: —
Last pushed: Apr 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/nottelabs/open-operator-evals"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
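A minimal sketch of fetching the same data in Python, assuming only the endpoint shown in the curl example above; the response schema is not documented here, so the code inspects the top-level keys rather than assuming field names:

import requests

# Public endpoint from the curl example above; no API key is
# needed for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/agents/nottelabs/open-operator-evals"

response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors

data = response.json()
# The exact response schema is an unknown here, so just list
# what the API returns instead of hard-coding field names.
print(sorted(data.keys()))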
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards