cake-lab/HybridDepth
Official implementation of the HybridDepth model [WACV 2025, ISMAR 2024]
HybridDepth estimates depth from a focal stack: a series of images of the same scene captured at different focus settings. Given the stack as input, it produces a dense depth map giving the distance of each point in the scene. This is useful for researchers and practitioners working in computer vision, 3D reconstruction, and augmented reality.
Use this if you need highly accurate, robust depth perception from camera images and can capture multiple images at varying focus.
Not ideal if you only have a single image, as this method relies on focal stack inputs for its superior accuracy.
Stars: 173
Forks: 20
Language: Jupyter Notebook
License: GPL-3.0
Category:
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cake-lab/HybridDepth"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
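The endpoint above appears to follow a predictable path layout. As a minimal sketch, assuming every repository is addressed as `/api/v1/quality/<category>/<owner>/<repo>` (a pattern generalized from the single example shown, not documented API behavior), a small helper can build these URLs:

```python
from urllib.parse import quote

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository.

    The path layout is an assumption inferred from the one example
    shown on this page; percent-encode each segment to be safe.
    """
    return f"{BASE}/{quote(category, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

print(quality_url("ml-frameworks", "cake-lab", "HybridDepth"))
```

Fetching the URL (e.g. with `curl` as shown above) returns the repository's data; the response schema is not documented here, so it is not assumed.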
Related frameworks
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
tjqansthd/LapDepth-release
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals