ShuweiShao/MonoDiffusion
[TCSVT2024] MonoDiffusion: Self-Supervised Monocular Depth Estimation Using Diffusion Model
This project estimates a dense depth map from a single 2D image, without special sensors or pre-existing ground-truth depth data. Given one RGB image as input, it outputs a corresponding depth map giving the distance of each point in the scene from the camera. This is useful for researchers and engineers working on autonomous vehicles, robotics, or other computer vision applications.
No commits in the last 6 months.
Use this if you need to understand the 3D structure of a scene from a single camera image, especially in applications where collecting precise 3D data is difficult or impossible.
Not ideal if you already have access to stereo cameras or LiDAR data, as this tool focuses on inferring depth from monocular (single-camera) input.
Stars
32
Forks
3
Language
Python
License
MIT
Category
diffusion
Last pushed
Mar 27, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ShuweiShao/MonoDiffusion"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
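For scripted access, the same endpoint can be called from Python. This is a minimal sketch using only the standard library; the `quality_url` helper and the response schema are assumptions, not part of the documented API — only the URL pattern shown in the curl example above is taken from this page.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository (helper name is hypothetical)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body.

    The exact response fields are not documented here, so treat the
    returned dict's keys as unknown until you inspect a real response.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: build the URL for this repository.
print(quality_url("diffusion", "ShuweiShao", "MonoDiffusion"))
```

Without an API key this counts against the shared 100 requests/day limit, so cache responses rather than polling.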
Higher-rated alternatives
fangchangma/sparse-to-dense
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image"...
Aradhye2002/EcoDepth
[CVPR'2024] Official implementation of the paper "ECoDepth: Effective Conditioning of Diffusion...
fangchangma/sparse-to-dense.pytorch
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image"...
albert100121/AiFDepthNet
Official PyTorch implementation of ICCV 2021 paper "Bridging Unsupervised and Supervised...
chen742/DCF
This is the official implementation of "Transferring to Real-World Layouts: A Depth-aware...