jspenmar/monodepth_benchmark
Code for "Deconstructing Monocular Depth Reconstruction: The Design Decisions that Matter" (https://arxiv.org/abs/2208.01489)
This project helps researchers and engineers analyze and understand how different design choices impact the performance of self-supervised monocular depth estimation models. By taking raw image sequences and various model configurations as input, it produces depth maps and performance metrics. This is useful for computer vision researchers, autonomous driving engineers, and anyone working on 3D reconstruction from 2D images.
120 stars. No commits in the last 6 months.
Use this if you need to systematically compare and evaluate different approaches for generating depth information from single camera footage without requiring extensive labeled depth data.
Not ideal if you are looking for a pre-trained, production-ready depth estimation model or if your primary goal is real-time deployment on embedded systems.
Stars: 120
Forks: 16
Language: Python
License: —
Category: —
Last pushed: Jul 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jspenmar/monodepth_benchmark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
cake-lab/HybridDepth
Official implementation of the HybridDepth model [WACV 2025, ISMAR 2024]
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos