worldbench/3EED
[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D
This project offers a comprehensive dataset and tools for training AI models to understand and 'ground' language instructions within 3D environments. It takes in 3D data from vehicles, drones, or quadruped robots, along with images and text descriptions, and outputs precise 3D locations (bounding boxes) of described objects. Robotics engineers, autonomous system developers, and researchers in 3D scene understanding will find this useful for developing robust navigation and interaction systems.
Use this if you are developing AI models that need to locate specific objects in complex real-world 3D environments using natural language commands, especially across diverse robotic platforms.
Not ideal if you are looking for a pre-built, plug-and-play object detection solution that requires no training or fine-tuning of 3D grounding models.
Stars: 206
Forks: 13
Language: Python
License: Apache-2.0
Category
Last pushed: Dec 26, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/worldbench/3EED"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
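The same endpoint can be called from Python using only the standard library. This is a minimal sketch based on the curl example above; the `Authorization: Bearer` header for keyed access and any JSON field names are assumptions, not documented behavior of this API.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, repo: str) -> str:
    # Path layout taken from the curl example above: /quality/<category>/<owner>/<name>
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str, api_key=None) -> dict:
    # Keyless access is rate-limited to 100 requests/day; pass a key for 1,000/day.
    req = urllib.request.Request(build_url(category, repo))
    if api_key:
        # Assumed auth scheme -- check the API docs for the actual header name.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(build_url("computer-vision", "worldbench/3EED"))
```

Calling `fetch_quality("computer-vision", "worldbench/3EED")` performs the same request as the curl command, returning the parsed JSON record.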
Higher-rated alternatives
col14m/cadrille
[ICLR2026] cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning
filaPro/cad-recode
[ICCV2025] CAD-Recode: Reverse Engineering CAD Code from Point Clouds
pengsongyou/openscene
[CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
cambrian-mllm/cambrian-s
Cambrian-S: Towards Spatial Supersensing in Video
Gorilla-Lab-SCUT/PaDT
[ICLR 2026] Official implementation of "Patch-as-Decodable-Token: Towards Unified Multi-Modal...