ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
This project converts standard 2D photos or video frames into detailed depth maps that show how far objects are from the camera. It takes a single color image as input and outputs a grayscale depth map or a 3D point-cloud reconstruction. It is useful for anyone working on 3D computer vision, robotics, or augmented reality applications.
1,605 stars. No commits in the last 6 months.
Use this if you need to quickly and accurately infer depth from a single camera image for applications like 3D scene understanding or object interaction.
Not ideal if you require extremely precise, lidar-grade depth measurements or if your application cannot accommodate a GPU.
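For reference, a minimal inference sketch in the style of the repo's README. It assumes you run from the repository root with the pretrained NYU Depth v2 weights (nyu.h5) downloaded, and it targets the standalone Keras / TensorFlow 1.x stack the project was written for; the module names (layers, utils) and helpers (predict, load_images) follow the README but should be checked against the current code.

# Minimal DenseDepth inference sketch (assumes repo root + nyu.h5 weights).
from keras.models import load_model
from layers import BilinearUpSampling2D
from utils import predict, load_images

# Register the custom upsampling layer so Keras can deserialize the model;
# the training loss function is not needed at inference time.
custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D,
                  'depth_loss_function': None}
model = load_model('nyu.h5', custom_objects=custom_objects, compile=False)

# Load RGB test images and predict a depth map for each.
inputs = load_images(['examples/1_image.png'])
outputs = predict(model, inputs)
print(outputs.shape)  # (N, H, W, 1) depth maps, at reduced resolution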
Stars: 1,605
Forks: 349
Language: Jupyter Notebook
License: GPL-3.0
Category: ML Frameworks
Last pushed: Dec 07, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ialhashim/DenseDepth"
Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
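If you prefer Python over curl, here is a minimal sketch using the requests library. The JSON response schema is not documented above, so the code only lists whichever fields come back rather than assuming any names.

# Fetch the same quality data in Python; the response schema is not
# documented here, so we just inspect which fields the API returns.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ialhashim/DenseDepth"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(sorted(data.keys()))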
Related frameworks
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
tjqansthd/LapDepth-release
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals