wvangansbeke/Sparse-Depth-Completion

Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st place on KITTI) [MVA 2019]

Score: 47 / 100 (Emerging)

This project helps self-driving car engineers and researchers transform incomplete and noisy depth information from LiDAR sensors into full, accurate depth maps. It takes in sparse LiDAR point clouds and corresponding RGB camera images, then combines them to output a dense depth map for an entire scene. This is useful for anyone developing or evaluating autonomous navigation systems where precise environmental understanding is critical.
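To make the input/output concrete, here is a minimal sketch of the data this kind of model consumes. This is a generic KITTI-style illustration, not this repository's actual API: the array names, image size, and 5% coverage figure are assumptions for the example, and missing LiDAR returns are encoded as zeros by convention.

```python
import numpy as np

# Hypothetical illustration of depth-completion inputs (not this repo's API).
# KITTI-style convention: sparse depth is an H x W array, 0 = no LiDAR return.
H, W = 256, 1216
rng = np.random.default_rng(0)

rgb = rng.integers(0, 256, size=(H, W, 3), dtype=np.uint8)  # guiding camera image
sparse_depth = np.zeros((H, W), dtype=np.float32)

# LiDAR typically covers only a few percent of image pixels.
n_points = int(0.05 * H * W)
ys = rng.integers(0, H, n_points)
xs = rng.integers(0, W, n_points)
sparse_depth[ys, xs] = rng.uniform(1.0, 80.0, n_points)  # depths in metres

valid = sparse_depth > 0  # validity mask used when supervising only observed pixels
print(f"LiDAR coverage: {valid.mean():.1%}")
```

The network's job is to fill the ~95% of pixels with no return, using the RGB image as guidance; the dense output has the same H x W shape as `sparse_depth`.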

506 stars. No commits in the last 6 months.

Use this if you need to create detailed and precise depth maps for autonomous vehicles or robotics using combined LiDAR and standard camera inputs.

Not ideal if your application requires a commercial license, as this software is currently restricted to personal and research use.

autonomous-driving robotics 3D-reconstruction sensor-fusion environmental-perception
Status: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 21 / 25


Stars: 506
Forks: 77
Language: Python
License:
Last pushed: May 01, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/wvangansbeke/Sparse-Depth-Completion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
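The curl call above can also be made from Python with the standard library. A minimal sketch, assuming only the URL shape shown in the example; the `quality_url` helper is hypothetical, and the response's JSON fields are not documented here, so the fetch is left as a comment:

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL used in the curl example above (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "wvangansbeke", "Sparse-Depth-Completion")
print(url)

# To actually fetch (stdlib only; treat the parsed dict's keys as unknowns,
# since the response schema is not documented on this page):
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```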