xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
OSWorld evaluates how well AI agents can perform open-ended, complex tasks on a real computer, the way a human would. Given an agent and a natural-language task description, it measures how accurately and efficiently the agent completes the task across several operating systems. It is aimed at AI researchers and developers building or improving agents that interact with computers.
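For a sense of what evaluation looks like in practice, here is a minimal sketch based on the quickstart in the repo's README; the DesktopEnv interface and task-config fields follow that README, but may change between versions, and the config and evaluator specs are elided here for brevity.

from desktop_env.desktop_env import DesktopEnv

# A task is a config dict: an instruction plus setup and evaluator specs.
# (Fields abbreviated; see the repo's evaluation_examples for complete tasks.)
example = {
    "id": "94d95f96-9699-4208-98ba-3c8bd9cf33d8",
    "instruction": "I want to install Spotify on my current system. Could you please help me?",
    "config": [],      # environment setup steps elided
    "evaluator": {},   # success-checking spec elided
}

# Spin up a VM-backed desktop; the agent acts by issuing pyautogui commands.
env = DesktopEnv(action_space="pyautogui")

obs = env.reset(task_config=example)
# Each step executes an action string and returns a gym-style tuple.
obs, reward, done, info = env.step("pyautogui.rightClick()")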
2,664 stars. Actively maintained with 19 commits in the last 30 days.
Use this if you are developing AI agents and need a rigorous, standardized way to test their ability to complete real-world tasks across different operating systems like Ubuntu or Windows.
Not ideal if you are looking for a tool to automate specific tasks for personal or business use, as this is a benchmark and development environment for AI agents.
Stars: 2,664
Forks: 411
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 19
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xlang-ai/OSWorld"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
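The same endpoint can also be queried from Python; a small sketch using requests (the response schema is not documented on this page, so inspect the JSON to see the fields):

import requests

# Query the same endpoint as the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xlang-ai/OSWorld"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Print the JSON payload to inspect the available fields.
print(resp.json())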
Related tools
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
swefficiency/swefficiency
Benchmark harness and code for "SWE-fficiency: Can Language Models Optimize Real World...