jspenmar/monodepth_benchmark

Code for "Deconstructing Monocular Depth Reconstruction: The Design Decisions that Matter" (https://arxiv.org/abs/2208.01489)

Quality score: 41 / 100 (Emerging)

This project helps researchers and engineers analyze how different design choices affect the performance of self-supervised monocular depth estimation models. Given raw image sequences and a model configuration, it produces depth maps and performance metrics. It is aimed at computer vision researchers, autonomous driving engineers, and anyone reconstructing 3D structure from 2D images.

120 stars. No commits in the last 6 months.

Use this if you need to systematically compare and evaluate different approaches for generating depth information from single camera footage without requiring extensive labeled depth data.

Not ideal if you are looking for a pre-trained, production-ready depth estimation model or if your primary goal is real-time deployment on embedded systems.

Tags: computer-vision, 3d-reconstruction, robotics, autonomous-vehicles, image-analysis
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 120
Forks: 16
Language: Python
License:
Last pushed: Jul 20, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jspenmar/monodepth_benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
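For scripted access, the same endpoint can be queried from Python using only the standard library. This is a minimal sketch: the URL comes from the curl command above, but the helper names and the assumption that the response is a JSON object are mine, not part of the documented API.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality endpoint URL for a repo, e.g. 'jspenmar/monodepth_benchmark'."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the quality report (assumed to be JSON; 100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("ml-frameworks", "jspenmar/monodepth_benchmark"))
```

Swap in an API key (once obtained) via a request header or query parameter as the service documents it; the free tier above needs no key at all.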