sparse-to-dense and sparse-to-dense.pytorch
These are ecosystem siblings: two implementations of the same "Sparse-to-Dense" depth prediction algorithm by the same author, one written for the Torch framework and one for PyTorch.
About sparse-to-dense
fangchangma/sparse-to-dense
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)
This project helps robotics engineers, autonomous vehicle developers, and augmented reality creators accurately estimate the depth of objects in a scene. Given a regular color image and a few sparse depth measurements, it produces a detailed depth map for the entire scene, which is useful for improving spatial awareness in computer vision applications.
About sparse-to-dense.pytorch
fangchangma/sparse-to-dense.pytorch
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (PyTorch Implementation)
This project helps robotics engineers and researchers create detailed depth maps from images that have only partial depth information. Given a standard camera image and a sparse collection of depth measurements (such as from LiDAR), it predicts a depth value for every pixel in the scene. This is useful for applications that need dense 3D understanding from limited sensor data.
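To make the input format concrete: during training, these projects simulate a sparse depth sensor by keeping only a small number of randomly chosen pixels from a dense ground-truth depth map and zeroing out the rest. The sketch below is a minimal, framework-free illustration of that idea; the function name `make_sparse_depth` and its exact signature are illustrative assumptions, not the repositories' actual API.

```python
import random

def make_sparse_depth(dense_depth, num_samples, seed=0):
    """Simulate a sparse depth input by keeping only `num_samples`
    randomly chosen pixels from a dense depth map (zeros elsewhere).

    Illustrative sketch only -- not the repos' actual data loader.
    `dense_depth` is a list of rows of floats (an H x W depth map).
    """
    h, w = len(dense_depth), len(dense_depth[0])
    rng = random.Random(seed)
    # Pick `num_samples` distinct pixel locations uniformly at random.
    coords = rng.sample([(r, c) for r in range(h) for c in range(w)],
                        num_samples)
    # Start from an all-zero map and copy over only the sampled depths.
    sparse = [[0.0] * w for _ in range(h)]
    for r, c in coords:
        sparse[r][c] = dense_depth[r][c]
    return sparse
```

The network then receives the color image together with this mostly-zero depth channel and is trained to reconstruct the full dense map.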