alexklwong/calibrated-backprojection-network

PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)

Quality score: 45 / 100 (Emerging)

This project converts sparse 3D depth measurements, often obtained from lidar sensors or Structure-from-Motion pipelines, into a complete and dense 3D representation. It takes an image, a sparse point cloud (or sparse depth map), and camera calibration parameters as input, and outputs a dense depth map of the scene. This is useful for robotics engineers, autonomous vehicle developers, or anyone working on 3D scene understanding where detailed environmental mapping is critical.
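To make the input/output relationship concrete, here is a minimal sketch (not the repository's own API) of the standard pinhole backprojection that lifts a sparse depth map plus camera intrinsics into a 3D point cloud. Variable names and the intrinsic values are illustrative assumptions:

```python
import numpy as np

def backproject_sparse_depth(depth, K):
    """Lift valid (depth > 0) pixels to 3D camera-frame points.

    depth : (H, W) sparse depth map, zeros where no measurement exists
    K     : (3, 3) camera intrinsic matrix
    Returns an (N, 3) array of 3D points, one per valid pixel.
    """
    v, u = np.nonzero(depth > 0)            # pixel rows/cols with a measurement
    z = depth[v, u]                          # measured depths
    fx, fy = K[0, 0], K[1, 1]                # focal lengths
    cx, cy = K[0, 2], K[1, 2]                # principal point
    x = (u - cx) * z / fx                    # invert the pinhole projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Example with KITTI-like intrinsics (values are placeholders):
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])
depth = np.zeros((370, 1224))
depth[100, 200] = 10.0                       # a single sparse measurement
points = backproject_sparse_depth(depth, K)
print(points.shape)                          # (1, 3)
```

The paper's contribution is learning to densify such sparse measurements; this sketch only shows the geometric lifting step that the calibration parameters make possible.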

129 stars. No commits in the last 6 months.

Use this if you need to reliably reconstruct full 3D scenes from limited, scattered depth measurements and corresponding images, even when using different sensor platforms.

Not ideal if your application requires depth estimation without any initial sparse point cloud data or if you primarily work with single 2D images.

3D-reconstruction robotics autonomous-vehicles computer-vision scene-understanding
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 129
Forks: 24
Language: Python
License: not listed
Last pushed: Oct 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/alexklwong/calibrated-backprojection-network"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.