Aradhye2002/EcoDepth

[CVPR'2024] Official implementation of the paper "ECoDepth: Effective Conditioning of Diffusion Models for Monocular Depth Estimation"

Quality score: 46 / 100 (Emerging)

This tool helps researchers and computer vision engineers determine the distance of objects in a scene from a single 2D image or video frame. It takes a standard color image as input and outputs a detailed depth map, where each pixel represents how far away that part of the scene is. This is ideal for anyone working on scene understanding, robotics, or augmented reality applications.


Use this if you need to generate accurate depth maps from single images, especially for indoor (like NYUv2) or outdoor (like KITTI) environments, and want to leverage state-of-the-art diffusion models.

Not ideal if you require real-time processing on very resource-constrained devices without GPU acceleration or if you have access to specialized multi-sensor setups for depth sensing.

monocular-depth-estimation computer-vision robotics augmented-reality scene-understanding
Not published as a package; no dependents.
Maintenance 6 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 206
Forks: 21
Language: Python
License:
Last pushed: Nov 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Aradhye2002/EcoDepth"

Open to everyone: 100 requests per day with no key needed. A free key raises the limit to 1,000 per day.
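The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the URL shape from the curl example above; the structure of the JSON response is not documented here, so the script just pretty-prints whatever comes back:

```python
"""Fetch the quality data for a repository from the endpoint shown above.

The URL path (topic/owner/repo) follows the curl example; any response
field names are unknown, so the payload is printed as-is for inspection.
"""
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str, topic: str = "diffusion") -> str:
    """Build the endpoint URL (path shape taken from the curl example)."""
    return f"{API_BASE}/{topic}/{owner}/{repo}"


if __name__ == "__main__":
    url = quality_url("Aradhye2002", "EcoDepth")
    # Network call; counts against the 100 requests/day unauthenticated limit.
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    print(json.dumps(data, indent=2))
```

Keeping the request behind the `__main__` guard means the URL helper can be imported and reused without triggering a network call.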