tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
This project helps robotics engineers and autonomous vehicle developers understand their environment by estimating scene depth and camera motion from standard video. It takes a monocular video sequence as input and outputs, for each frame, a depth map (how far away each point in the scene is) and the camera's relative pose between frames. This is useful for building self-navigating systems.
2,014 stars. No commits in the last 6 months.
Use this if you need to determine the 3D structure of a scene and how a camera moves through it, using only standard video footage without specialized depth sensors.
Not ideal if you require real-time performance on embedded systems or need a solution that runs outside of the TensorFlow 1.0 ecosystem.
Stars: 2,014
Forks: 555
Language: Jupyter Notebook
License: MIT
Category: ml-frameworks
Last pushed: Oct 26, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tinghuiz/SfMLearner"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
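For scripted access, the curl command above can be wrapped in a small Python helper. This is a minimal sketch using only the standard library; it assumes the endpoint returns JSON and that the URL follows the `category/owner/repo` pattern shown in the example (the `quality_url` and `fetch_quality` names are illustrative, not part of the API).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, category: str = "ml-frameworks") -> str:
    # Build the per-repo endpoint URL, following the pattern in the curl example.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch the repo's quality record; the JSON response shape is assumed.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("tinghuiz", "SfMLearner")
    print(data)
```

Without a key this stays within the 100 requests/day limit; a free key raises that to 1,000/day.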
Related frameworks
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
tjqansthd/LapDepth-release
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals