andreaconti/sparsity-agnostic-depth-completion

A depth-completion technique agnostic to the sparsity pattern of the input depth (WACV 2023).

Score: 19 / 100 (Experimental)

This tool helps autonomous vehicle developers, robotics engineers, and 3D reconstruction specialists fill in incomplete 3D depth information from sensors like LiDAR or depth cameras. You input a sparse depth map (a 2D image where most pixels have no depth data) and a corresponding color image, and it outputs a complete, dense depth map. This is especially useful for applications where sensor readings might be very sparse or inconsistent.
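The input/output contract described above can be sketched with a minimal NumPy example. This is an illustration of the data shapes only, not this repository's actual code: the array names, the ~5% sampling rate, and the mean-fill "model" standing in for the network are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 240, 320

# Simulated dense "ground truth" depth in meters (placeholder data).
dense_depth = rng.uniform(0.5, 10.0, size=(h, w)).astype(np.float32)

# Sparse input: ~5% of pixels carry a depth sample, the rest are 0 (no reading).
valid = rng.random((h, w)) < 0.05
sparse_depth = np.where(valid, dense_depth, 0.0).astype(np.float32)

# Matching color image, H x W x 3.
rgb = rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

# A completion model maps (sparse_depth, rgb) -> a dense H x W prediction.
# Here a trivial baseline (fill holes with the mean of the valid samples)
# stands in for the network, purely to show the contract.
fill = sparse_depth[valid].mean()
predicted = np.where(valid, sparse_depth, fill)

print(sparse_depth.shape, predicted.shape)
```

A real model would replace the mean-fill step with a learned prediction, but the shapes in and out stay the same: a sparse `H x W` depth map plus an `H x W x 3` image produce a dense `H x W` depth map.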

No commits in the last 6 months.

Use this if you need to generate accurate, full depth maps from highly sparse or variably dense sensor inputs for applications like autonomous navigation or 3D scene understanding.

Not ideal if your depth inputs are consistently dense and complete, or if you need to process only standard, evenly distributed depth data.

autonomous-vehicles robotics 3d-reconstruction computer-vision sensor-fusion
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 33
Forks: 1
Language: Python
License: None
Last pushed: Nov 23, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/andreaconti/sparsity-agnostic-depth-completion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.