fangchangma/sparse-to-dense.pytorch

ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (PyTorch Implementation)

Score: 42 / 100 (Emerging)

This project helps robotics engineers and researchers create dense depth maps from images that carry only partial depth information. By fusing a standard camera image with a sparse set of depth measurements (for example, from LiDAR), it predicts a depth value for every pixel in the scene. This makes it well suited to applications that need precise 3D understanding from limited sensor data.

452 stars. No commits in the last 6 months.

Use this if you need to generate dense depth maps for robotics, autonomous vehicles, or 3D scene reconstruction from a combination of standard images and sparse depth sensor readings.

Not ideal if you don't have any sparse depth measurements and only want to predict depth from a single color image, or if you require real-time inference on low-power embedded systems.
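As a rough sketch of the kind of input this approach consumes, the snippet below samples a sparse subset of depth pixels and stacks it with an RGB image into a 4-channel array. This is an illustrative NumPy sketch under assumed conventions (the `sample_sparse_depth` helper and the 228×304 resolution are assumptions for the example, not part of the repository's API):

```python
import numpy as np

def sample_sparse_depth(depth, num_samples, rng=None):
    """Keep `num_samples` randomly chosen valid depth pixels; zero the rest.

    Hypothetical helper mimicking sparse LiDAR-style measurements.
    """
    rng = np.random.default_rng(rng)
    sparse = np.zeros_like(depth)
    valid = np.flatnonzero(depth > 0)  # indices of pixels with known depth
    chosen = rng.choice(valid, size=min(num_samples, valid.size), replace=False)
    sparse.flat[chosen] = depth.flat[chosen]
    return sparse

# Assemble the RGB + sparse-depth input (shape and data are illustrative).
rgb = np.random.rand(228, 304, 3).astype(np.float32)
dense_depth = np.random.rand(228, 304).astype(np.float32) + 0.1  # all valid
sparse_depth = sample_sparse_depth(dense_depth, num_samples=200, rng=0)
rgbd = np.concatenate([rgb, sparse_depth[..., None]], axis=-1)  # (H, W, 4)
```

A model would then consume the 4-channel `rgbd` array and regress the full dense depth map.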

robotics autonomous-vehicles 3D-reconstruction computer-vision depth-sensing
No license · Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 24 / 25


Stars: 452
Forks: 99
Language: Python
License: None
Last pushed: Apr 01, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/fangchangma/sparse-to-dense.pytorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
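The same endpoint can be called from Python with only the standard library. A minimal sketch (the live call is commented out because the response schema is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"
repo = "fangchangma/sparse-to-dense.pytorch"
url = f"{BASE}/{repo}"

# Uncomment to fetch the live quality data as JSON:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```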