duanyiqun/DiffusionDepth
PyTorch implementation of DiffusionDepth, a diffusion approach to 3D depth perception (ECCV 2024)
This project helps convert a standard 2D image into a depth map, which shows how far away objects are from the camera. You provide a single image, and it outputs a corresponding image where pixel brightness represents depth. This is useful for professionals in fields like autonomous driving or robotics who need to understand the 3D structure of a scene from 2D photos.
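The output convention described above (brighter pixels = greater distance) can be sketched with a small normalization routine. This is a generic illustration, not code from the repository; the `depth_to_grayscale` helper and the linear min–max scaling are assumptions for demonstration.

```python
import numpy as np

def depth_to_grayscale(depth: np.ndarray) -> np.ndarray:
    """Normalize a depth map to an 8-bit grayscale image where
    brighter pixels represent points farther from the camera.
    (Illustrative helper; the repository's own output format may differ.)"""
    d_min, d_max = float(depth.min()), float(depth.max())
    scaled = (depth - d_min) / max(d_max - d_min, 1e-8)
    return (scaled * 255).astype(np.uint8)

# Example: a synthetic 2x2 depth map in meters
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
gray = depth_to_grayscale(depth)
```

Here the nearest point (1.0 m) maps to black and the farthest (4.0 m) to white, matching the brightness-as-depth convention.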
Use this if you need to precisely estimate the depth of every pixel in a scene using only a single photograph.
Not ideal if you already have multiple camera views or LiDAR data to determine depth, as this tool focuses on inferring depth from a single image.
Stars
339
Forks
23
Language
Python
License
Apache-2.0
Category
Last pushed
Oct 31, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/duanyiqun/DiffusionDepth"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
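The endpoint above appears to follow a fixed `category/owner/repo` path pattern. A minimal sketch for building the request URL programmatically, assuming that pattern holds (the `quality_api_url` helper and the meaning of the `"diffusion"` path segment are inferred from the single example, and the response schema is not documented here):

```python
def quality_api_url(owner: str, repo: str, category: str = "diffusion") -> str:
    # URL pattern inferred from the curl example; "category" is assumed
    # to be the listing category the repo is filed under.
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"

url = quality_api_url("duanyiqun", "DiffusionDepth")
```

The URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen(url)`), subject to the rate limits noted above.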
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...