self-supervised-depth-completion and Sparse-Depth-Completion
These are competing approaches to the same task: both perform sparse-to-dense depth completion by fusing sparse LiDAR measurements with monocular RGB images. The key difference is the training signal: fangchangma's method is self-supervised, requiring no dense ground-truth depth, while wvangansbeke's supervised method achieves stronger results on the KITTI depth completion benchmark.
About self-supervised-depth-completion
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera"
This helps autonomous vehicles and robots perceive their surroundings in fine detail. It takes sparse depth data from a LiDAR sensor and a standard camera image, then fills in the gaps to create a complete, dense depth map. Roboticists and autonomous vehicle engineers can use this to give their systems a more comprehensive understanding of 3D space.
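To make the task concrete, here is a deliberately naive sketch of "filling in the gaps" of a sparse depth map using nearest-neighbor propagation. This is purely illustrative of the input/output shapes involved; it is not the learned, self-supervised method the repository implements, and the function name is my own.

```python
# Illustrative sketch only: naive nearest-neighbor densification of a sparse
# depth map. NOT the repository's learned method; just shows the data shapes.
import numpy as np
from scipy import ndimage

def densify_nearest(sparse_depth: np.ndarray) -> np.ndarray:
    """Fill missing (zero) pixels with the depth of the nearest valid pixel."""
    valid = sparse_depth > 0
    # For every pixel, find the indices of the nearest valid pixel.
    _, (rows, cols) = ndimage.distance_transform_edt(~valid, return_indices=True)
    return sparse_depth[rows, cols]

# Toy 4x4 sparse depth map (0.0 = no LiDAR return at that pixel).
sparse = np.array([
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 5.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 3.0, 0.0],
])
dense = densify_nearest(sparse)
assert (dense > 0).all()  # every pixel now carries a depth value
```

A learned method replaces this geometric fill with a network that also conditions on the RGB image, using edges and texture to decide where depth should change sharply.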
About Sparse-Depth-Completion
wvangansbeke/Sparse-Depth-Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st place on KITTI) [MVA 2019]
This project helps self-driving car engineers and researchers transform incomplete and noisy depth information from LiDAR sensors into full, accurate depth maps. It takes in sparse LiDAR point clouds and corresponding RGB camera images, then combines them to output a dense depth map for an entire scene. This is useful for anyone developing or evaluating autonomous navigation systems where precise environmental understanding is critical.
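For readers evaluating such systems, the KITTI depth completion benchmark's headline metric is root-mean-square error computed only where ground-truth depth exists. A minimal sketch of that metric (function name and units are my own choices, not taken from the repository):

```python
# Illustrative sketch: RMSE restricted to valid ground-truth pixels, the style
# of metric used to rank methods on the KITTI depth completion benchmark.
import numpy as np

def rmse_on_valid(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE between prediction and ground truth, evaluated only where gt > 0."""
    mask = gt > 0
    return float(np.sqrt(np.mean((pred[mask] - gt[mask]) ** 2)))

gt = np.array([[0.0, 2.0], [4.0, 0.0]])    # 0.0 = no ground-truth depth
pred = np.array([[1.0, 2.5], [3.5, 9.0]])  # dense prediction covers all pixels
print(rmse_on_valid(pred, gt))  # → 0.5
```

Masking matters: LiDAR-derived ground truth is itself sparse, so errors at pixels without ground truth (like the 9.0 above) must not influence the score.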