fangchangma/self-supervised-depth-completion

ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera"

Score: 51 / 100 (Established)

This project helps autonomous vehicles and robots perceive their surroundings in fine detail. It takes sparse depth measurements from a LiDAR sensor plus a standard camera image, then fills in the gaps to produce a complete, dense depth map. Roboticists and autonomous-vehicle engineers can use it to give their systems a more comprehensive understanding of 3D space.
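
For intuition, here is a rough sketch of the task itself, not of this repository's code: given a sparse depth map (zeros wherever LiDAR returned nothing), even a naive nearest-neighbour fill yields a dense map. The function name and toy data below are made up for illustration; the repository instead trains a self-supervised network that also uses the RGB image.

import numpy as np
from scipy import ndimage

def naive_densify(sparse_depth):
    """Fill missing depth (zeros) with the nearest valid LiDAR return.

    A nearest-neighbour baseline that only illustrates the input/output
    of depth completion; it is not the repository's learned method.
    """
    missing = sparse_depth <= 0
    # For every pixel, indices of the nearest pixel with a valid measurement.
    _, (rows, cols) = ndimage.distance_transform_edt(missing, return_indices=True)
    return sparse_depth[rows, cols]

# Toy example: a 4x4 depth map with only three LiDAR returns.
sparse = np.zeros((4, 4), dtype=np.float32)
sparse[0, 1], sparse[2, 3], sparse[3, 0] = 5.0, 12.0, 8.0
print(naive_densify(sparse))  # every pixel now carries a depth value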

650 stars. No commits in the last 6 months.

Use this if you need to generate detailed, full-scene depth maps for your robots or self-driving cars using readily available sensor data.

Not ideal if you're not working with LiDAR and monocular camera inputs, or if you need depth information for non-automotive/robotics applications.

autonomous-vehicles robotics 3D-perception depth-estimation sensor-fusion
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

The overall score is the sum of the four category scores: 0 + 10 + 16 + 25 = 51 / 100.

Stars: 650
Forks: 134
Language: Python
License: MIT
Last pushed: Apr 24, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/fangchangma/self-supervised-depth-completion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
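
The same request from Python, for scripting; this assumes the endpoint returns JSON, which is not spelled out on this page:

import requests

# Same endpoint as the curl example above. The response schema is not
# documented here, so this simply prints whatever JSON comes back.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/fangchangma/self-supervised-depth-completion")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())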