microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of multi-modal AI agents.
This platform helps AI researchers and developers evaluate how well multi-modal AI agents perform tasks on a real Windows operating system. It takes your AI agents and a diverse set of Windows tasks as input, and outputs comprehensive benchmark results on the agents' performance. It is a valuable tool for rigorously testing AI agentic workflows.
Use this if you are an AI researcher or developer who needs to thoroughly test and benchmark the capabilities of your multi-modal AI agents in a realistic, scalable Windows environment.
Not ideal if you are an end-user looking for a pre-built AI agent to solve personal computing tasks.
Stars: 833
Forks: 92
Language: Python
License: MIT
Last pushed: Feb 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/microsoft/WindowsAgentArena"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
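The curl command above can also be scripted. Below is a minimal Python sketch that builds the per-repo endpoint URL and fetches it; the response format is not documented here, so treating it as JSON (and the `fetch_agent_quality` helper name) is an assumption.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def agent_quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_agent_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data for one repo.

    Assumption: the endpoint returns a JSON object; field names are
    not specified on this page, so inspect the response yourself.
    """
    with urllib.request.urlopen(agent_quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(agent_quality_url("microsoft", "WindowsAgentArena"))
```

Keep the free tier's 100 requests/day limit in mind if you poll many repos; an API key raises that to 1,000/day.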
Related agents
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
RouteWorks/RouterArena
RouterArena: An open framework for evaluating LLM routers with standardized datasets, metrics,...
dreadnode/AIRTBench-Code
Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models