fangchangma/sparse-to-dense
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)
This project helps robotics engineers, autonomous vehicle developers, or augmented reality creators accurately estimate the depth of objects in a scene. By taking a regular color image and a few sparse depth measurements, it produces a detailed depth map for the entire scene. This is useful for improving spatial awareness in computer vision applications.
441 stars. No commits in the last 6 months.
Use this if you need to generate dense depth information from limited depth sensor data combined with standard camera images for applications like robot navigation or 3D scene reconstruction.
Not ideal if you don't have access to a CUDA-enabled GPU or prefer working in a framework other than Torch.
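The paper's core input is a dense RGB image paired with a handful of sparse depth measurements; for training, such sparse input is typically simulated by randomly sampling a small number of valid pixels from a ground-truth depth map. A minimal NumPy sketch of that sampling step (function and parameter names are illustrative, not taken from the repo):

```python
import numpy as np

def sample_sparse_depth(dense_depth, num_samples, rng=None):
    """Simulate a sparse depth sensor by keeping only a random
    subset of valid (non-zero) pixels from a dense depth map."""
    if rng is None:
        rng = np.random.default_rng(0)
    sparse = np.zeros_like(dense_depth)
    # Only sample pixels where ground-truth depth is available.
    valid = np.flatnonzero(dense_depth > 0)
    chosen = rng.choice(valid, size=min(num_samples, valid.size), replace=False)
    sparse.flat[chosen] = dense_depth.flat[chosen]
    return sparse

# Example: keep 200 measurements from a synthetic 240x320 depth map.
depth = np.random.default_rng(1).uniform(0.5, 10.0, size=(240, 320))
sparse = sample_sparse_depth(depth, num_samples=200)
print(int((sparse > 0).sum()))  # → 200
```

The sparse map (same shape as the image, zeros where no measurement exists) would then be concatenated with the RGB channels as network input.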
Stars
441
Forks
95
Language
Lua
License
—
Category
Last pushed
Jul 21, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/fangchangma/sparse-to-dense"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
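The same endpoint can be called from Python with only the standard library; a minimal sketch, assuming the URL path shown in the `curl` example above (the JSON payload's fields are not documented here, so they are not assumed):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def build_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(build_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

print(build_url("fangchangma", "sparse-to-dense"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/fangchangma/sparse-to-dense
```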
Related models
Aradhye2002/EcoDepth
[CVPR'2024] Official implementation of the paper "ECoDepth: Effective Conditioning of Diffusion...
fangchangma/sparse-to-dense.pytorch
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image"...
ShuweiShao/MonoDiffusion
[TCSVT2024] MonoDiffusion: Self-Supervised Monocular Depth Estimation Using Diffusion Model
albert100121/AiFDepthNet
Official Pytorch implementation of ICCV 2021 paper "Bridging Unsupervised and Supervised...
chen742/DCF
This is the official implementation of "Transferring to Real-World Layouts: A Depth-aware...