peterstratton/Volume-DROID

Implementation of Volume-DROID

Score: 21 / 100 (Experimental)

This project helps robots and autonomous systems understand their surroundings by building a detailed 3D map. It takes live camera feeds, either from a single camera or a stereo pair, and produces a semantic 3D map of the environment. Roboticists, researchers in computer vision, and developers of autonomous vehicles would find this useful for real-time spatial awareness.

No commits in the last 6 months.

Use this if you need to create an online, real-time 3D semantic map of an environment using only camera input for applications like robotics navigation or augmented reality.

Not ideal if you require highly precise, centimeter-level accuracy for industrial metrology or need to process pre-recorded datasets offline without a real-time constraint.

robotics autonomous-navigation 3d-mapping computer-vision semantic-segmentation
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 5 / 25


Stars: 43
Forks: 2
Language: Python
License: None
Last pushed: Jun 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/peterstratton/Volume-DROID"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
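The same endpoint can be called from Python. A minimal sketch is below; note that the JSON field names (`score`, `maintenance`, `adoption`, `maturity`, `community`) are assumptions inferred from the subscores shown on this page, not a documented response schema.

```python
import json
from urllib.request import urlopen  # urlopen shown for reference; sample below is parsed offline

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "peterstratton", "Volume-DROID")

# Hypothetical response payload, mirroring the subscores listed above;
# the live API's actual schema may differ.
sample = json.loads(
    '{"score": 21, "maintenance": 0, "adoption": 8, "maturity": 8, "community": 5}'
)
subscore_total = (
    sample["maintenance"] + sample["adoption"] + sample["maturity"] + sample["community"]
)
print(url)
print(f"score {sample['score']}/100 (subscores sum to {subscore_total})")
```

Fetching `url` with `urlopen(url).read()` (or any HTTP client) should return the live payload, subject to the daily rate limit noted above.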