fangchangma/sparse-to-dense

ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)

Score: 50/100 (Established)

This project helps robotics engineers, autonomous-vehicle developers, and augmented-reality creators accurately estimate the depth of objects in a scene. Given a regular color image and a handful of sparse depth measurements, it produces a dense depth map for the entire scene, which improves spatial awareness in computer vision applications.

441 stars. No commits in the last 6 months.

Use this if you need to generate dense depth information from limited depth sensor data combined with standard camera images for applications like robot navigation or 3D scene reconstruction.

Not ideal if you lack a CUDA-enabled GPU, or if you prefer a framework other than Torch.

robotics autonomous-driving computer-vision 3D-reconstruction augmented-reality
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 24 / 25


Stars: 441
Forks: 95
Language: Lua
License:
Last pushed: Jul 21, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/fangchangma/sparse-to-dense"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
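The curl one-liner above can also be scripted. A minimal Python sketch, assuming only that the endpoint returns JSON; the function names here are hypothetical, not part of the API:

```python
"""Sketch: query the quality API shown above for any owner/repo pair.

Assumption (not confirmed by this page): the endpoint accepts other
repositories in the same path position and responds with JSON.
"""
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def build_url(owner: str, repo: str) -> str:
    # Mirror the path shape of the documented endpoint.
    return f"{API_BASE}/{owner}/{repo}"


def fetch_report(owner: str, repo: str) -> dict:
    # Plain GET with no key: the free tier allows 100 requests/day.
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


# Example: fetch_report("fangchangma", "sparse-to-dense")
# returns the parsed quality report as a Python dict.
```

For the higher 1,000/day limit you would attach the free key to the request; the exact mechanism (header or query parameter) is not specified on this page.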