yangcaoai/CoDA_NeurIPS2023

Official code for NeurIPS2023 paper: CoDA: Collaborative Novel Box Discovery and Cross-modal Alignment for Open-vocabulary 3D Object Detection

Overall score: 40 / 100 (Emerging)

This project helps engineers, roboticists, and researchers automatically identify and locate a wide variety of objects in 3D scans of indoor environments, even if the system hasn't been explicitly trained on those specific objects. It takes 3D point cloud data and descriptive text as input, then outputs precise 3D bounding boxes and labels for all detected objects. This is ideal for professionals developing smart environments, autonomous robots, or advanced virtual reality applications.

221 stars. No commits in the last 6 months.

Use this if you need to detect and localize many different types of objects in 3D indoor scenes using point cloud data, especially when you encounter objects that were not part of your initial training data.

Not ideal if your application requires object detection in 2D images, outdoor environments, or exclusively with a fixed, predefined set of object categories.

Topics: 3D-object-detection, robot-vision, indoor-mapping, open-vocabulary-detection, point-cloud-analysis
Flags: Stale (6 months) · No package published · No dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 221
Forks: 16
Language: Jupyter Notebook
License: MIT
Last pushed: Sep 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/yangcaoai/CoDA_NeurIPS2023"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
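The endpoint path in the curl example above follows the pattern `/api/v1/quality/{category}/{owner}/{repo}`. A minimal shell sketch that assembles the URL for any repository; `build_url` is an illustrative helper name, not part of any published client:

```shell
# Build the quality-report URL from its path segments. The
# /api/v1/quality/{category}/{owner}/{repo} pattern is inferred
# from the curl example above.
build_url() {
  printf 'https://pt-edge.onrender.com/api/v1/quality/%s/%s/%s' "$1" "$2" "$3"
}

# Print the URL for this repository; pass the result to curl to
# fetch the JSON report.
build_url computer-vision yangcaoai CoDA_NeurIPS2023
```

Keeping the URL construction in one helper makes it easy to query the report for other repositories in the same category.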