World-In-World/world-in-world
Code implementation of the paper "World-in-World: World Models in a Closed-Loop World" (ICLR'26 Oral)
This project offers a standardized way to test how well visual world models help intelligent agents perform embodied tasks such as navigating environments, answering questions about what they see, or manipulating objects. It takes a trained visual world model and task data as input, then reports metrics on how much the model improves the agent's ability to act and perceive within a simulated environment. It is aimed at researchers and engineers developing embodied AI agents and advanced robotics.
Use this if you need a reliable benchmark to measure the practical utility of visual world models for embodied agents, beyond just how realistic their generated images look.
Not ideal if you are looking for a simple, off-the-shelf solution for a specific robotics task, as this is primarily an evaluation framework for advanced AI models.
Stars: 139
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Feb 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/World-In-World/world-in-world"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
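For scripted access, the same endpoint can be called from Python. A minimal sketch using only the standard library; the URL pattern comes from the curl example above, but the JSON response schema is undocumented here, so no field names are assumed:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    # Mirror the curl example: /quality/<category>/<owner>/<repo>
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # No API key is required within the 100 requests/day free tier.
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

url = build_url("generative-ai", "World-In-World", "world-in-world")
# fetch_quality("generative-ai", "World-In-World", "world-in-world")
# would issue the request; the shape of the returned JSON is not
# documented on this page.
```

Calling `fetch_quality` performs a live request, so it is left commented out; swap in any `<owner>/<repo>` listed by the service to query a different project.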
Higher-rated alternatives
ZiYang-xie/WorldGen
🌍 WorldGen - Generate Any 3D Scene in Seconds
aioz-ai/AIOZ-GDANCE
AIOZ-GDANCE: a large-scale dataset & baseline for music-driven group dance generation. (CVPR 2023)
worldbench/WorldLens
[CVPR 2026] WorldLens: Full-Spectrum Evaluations of Driving World Models in Real World
Kobaayyy/Awesome-CVPR2026-CVPR2025-ICCV2025-CVPR2024-ECCV2024-AIGC
A Collection of Papers and Codes for CVPR2026/CVPR2025/ICCV2025/CVPR2024/ECCV2024 AIGC
nv-tlabs/XCube
[CVPR 2024 Highlight] XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies