wvangansbeke/Sparse-Depth-Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st place on KITTI) [MVA 2019]
This project helps self-driving car engineers and researchers turn the incomplete, noisy depth measurements from a LiDAR sensor into full, accurate depth maps. It takes a sparse LiDAR point cloud and the corresponding RGB camera image, then fuses them to produce a dense depth map of the entire scene. This is useful for anyone developing or evaluating autonomous navigation systems where precise environmental understanding is critical.
506 stars. No commits in the last 6 months.
Use this if you need to create detailed and precise depth maps for autonomous vehicles or robotics using combined LiDAR and standard camera inputs.
Not ideal if your application requires a commercial license, as this software is currently restricted to personal and research use.
Stars
506
Forks
77
Language
Python
License
—
Category
Computer Vision
Last pushed
May 01, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/wvangansbeke/Sparse-Depth-Completion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
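The same request can be made from Python. A minimal sketch: the URL pattern is taken from the curl command above, but the response field names (`stars`, `forks`, `commits_30d`) are assumptions based on the stats shown on this page, not documented API fields; check the actual JSON the endpoint returns.

```python
import json
from urllib.parse import quote

# Base endpoint as shown in the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("computer-vision", "wvangansbeke", "Sparse-Depth-Completion")

# Parsing a response shaped like the stats above (field names are assumed):
sample = '{"stars": 506, "forks": 77, "language": "Python", "commits_30d": 0}'
stats = json.loads(sample)
print(url)
print(stats["stars"], stats["forks"])
```

Fetching `url` with any HTTP client (e.g. `urllib.request.urlopen`) and decoding the body as JSON should yield a dictionary like `stats`, subject to the 100-requests/day keyless limit noted above.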
Higher-rated alternatives
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
nburrus/stereodemo
Small Python utility to compare and visualize the output of various stereo depth estimation algorithms
JiawangBian/sc_depth_pl
SC-Depth (V1, V2, and V3) for Unsupervised Monocular Depth Estimation ...
antocad/FocusOnDepth
Monocular depth estimation for an in-the-wild autofocus application.