xlang-ai/OSWorld

[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments

Score: 66 / 100 (Established)

This project evaluates how well AI agents can perform open-ended, complex tasks on a real computer, much as a human would. Given an AI agent and a task description, it measures how accurately and efficiently the agent completes the task across various operating systems. It is used by AI researchers and developers building or improving agents that need to interact with computers.

2,664 stars. Actively maintained with 19 commits in the last 30 days.

Use this if you are developing AI agents and need a rigorous, standardized way to test their ability to complete real-world tasks across different operating systems like Ubuntu or Windows.

Not ideal if you are looking for a tool to automate specific tasks for personal or business use, as this is a benchmark and development environment for AI agents.

AI agent evaluation · Human-computer interaction simulation · Operating system automation · AI model benchmarking · Robotic process automation (RPA) development
No Package · No Dependents
Maintenance 17 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25

How are scores calculated?

Stars: 2,664
Forks: 411
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 19

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xlang-ai/OSWorld"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
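If you prefer to consume the endpoint from Python rather than curl, a minimal sketch follows. Only the base URL and the `xlang-ai/OSWorld` path come from this page; the assumption that the endpoint returns JSON, and the helper names `quality_url` and `fetch_quality`, are illustrative, not part of any documented client.

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for an owner/repo slug."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes the endpoint returns a JSON object."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example: the record shown on this page (network call, so left commented out).
# data = fetch_quality("xlang-ai", "OSWorld")
# print(data)
```

Within the free tier (100 requests/day without a key), no authentication header is needed; how a key is passed for the 1,000/day tier is not specified here.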