worldbench/3EED

[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D

Score: 42 / 100 (Emerging)

This project offers a comprehensive dataset and tools for training AI models to understand and 'ground' language instructions within 3D environments. It takes in 3D data from vehicles, drones, or quadruped robots, along with images and text descriptions, and outputs precise 3D locations (bounding boxes) of described objects. Robotics engineers, autonomous system developers, and researchers in 3D scene understanding will find this useful for developing robust navigation and interaction systems.
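
To make the input/output interface concrete, here is a minimal sketch of what a single grounding sample could look like in Python. The dict layout, field names, and array shapes are illustrative assumptions, not the repository's actual data schema.

import numpy as np

# Hypothetical sample layout; every field name and shape below is an
# assumption for illustration, not the real 3EED format.
sample = {
    "platform": "drone",  # source platform: vehicle, drone, or quadruped robot
    "point_cloud": np.zeros((50_000, 3), dtype=np.float32),  # N x (x, y, z) scene points
    "image": np.zeros((720, 1280, 3), dtype=np.uint8),       # synchronized camera frame
    "description": "the red backpack next to the bench",     # natural-language query
    # Ground-truth 3D bounding box of the described object:
    # center (x, y, z), size (l, w, h), and yaw, as one 7-vector.
    "gt_box": np.array([4.2, -1.0, 0.3, 0.6, 0.3, 0.5, 1.57], dtype=np.float32),
}

A grounding model would consume the point cloud, image, and description, and be trained to predict the target box.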


Use this if you are developing AI models that need to locate specific objects in complex real-world 3D environments using natural language commands, especially across diverse robotic platforms.

Not ideal if you want a pre-built, plug-and-play object detector that requires no training or fine-tuning of 3D grounding models.

Tags: robotics, autonomous-navigation, 3D-scene-understanding, multi-modal-AI, semantic-grounding
No package. No dependents.

Score breakdown:
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 11 / 25
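
Note that the four subscores sum exactly to the overall score: 6 + 10 + 15 + 11 = 42, matching the 42 / 100 shown above.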


Stars: 206
Forks: 13
Language: Python
License: Apache-2.0
Last pushed: Dec 26, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/worldbench/3EED"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
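
For convenience, here is the same request in Python using only the standard library. This is a minimal sketch that assumes the endpoint returns JSON; the response schema is not documented on this page, so the snippet just pretty-prints whatever comes back.

import json
import urllib.request

# Public endpoint from the curl example above; no key needed up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/worldbench/3EED"

with urllib.request.urlopen(URL, timeout=10) as resp:
    body = resp.read().decode("utf-8")

try:
    # Pretty-print if the payload parses as JSON (assumed, not documented here).
    print(json.dumps(json.loads(body), indent=2))
except json.JSONDecodeError:
    print(body)  # otherwise fall back to the raw text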