Aradhye2002/EcoDepth
[CVPR'2024] Official implementation of the paper "ECoDepth: Effective Conditioning of Diffusion Models for Monocular Depth Estimation"
This tool helps researchers and computer vision engineers determine the distance of objects in a scene from a single 2D image or video frame. It takes a standard color image as input and outputs a detailed depth map, where each pixel represents how far away that part of the scene is. This is ideal for anyone working on scene understanding, robotics, or augmented reality applications.
Use this if you need to generate accurate depth maps from single images, especially for indoor (like NYUv2) or outdoor (like KITTI) environments, and want to leverage state-of-the-art diffusion models.
Not ideal if you require real-time processing on very resource-constrained devices without GPU acceleration or if you have access to specialized multi-sensor setups for depth sensing.
Stars: 206
Forks: 21
Language: Python
License: —
Category:
Last pushed: Nov 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Aradhye2002/EcoDepth"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
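The endpoint above follows a predictable owner/repo pattern. As a minimal sketch, here is a Python helper that builds the same URL for any repository; the fixed "diffusion" category segment is an assumption carried over from the example, and the commented-out fetch assumes the response is JSON:

```python
from urllib.parse import quote

# Base path assumed from the curl example above; "diffusion" is the
# category segment for this repo and may differ for other projects.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("Aradhye2002", "EcoDepth"))
# Prints: https://pt-edge.onrender.com/api/v1/quality/diffusion/Aradhye2002/EcoDepth

# To actually fetch the data (requires network; JSON response is an assumption):
# import json, urllib.request
# with urllib.request.urlopen(quality_url("Aradhye2002", "EcoDepth")) as resp:
#     data = json.load(resp)
```

Within the free tier (100 requests/day without a key), this is enough to script lookups across many repositories.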
Higher-rated alternatives
fangchangma/sparse-to-dense
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image"...
fangchangma/sparse-to-dense.pytorch
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image"...
ShuweiShao/MonoDiffusion
[TCSVT2024] MonoDiffusion: Self-Supervised Monocular Depth Estimation Using Diffusion Model
albert100121/AiFDepthNet
Official PyTorch implementation of ICCV 2021 paper "Bridging Unsupervised and Supervised...
chen742/DCF
This is the official implementation of "Transferring to Real-World Layouts: A Depth-aware...