tinghuiz/SfMLearner

An unsupervised learning framework for depth and ego-motion estimation from monocular videos

51 / 100 · Established

This project helps robotics engineers and autonomous vehicle developers understand their environment by estimating scene depth and camera motion from standard video. It takes a monocular video sequence as input and outputs, for each frame, a depth map showing how far away objects are, along with the camera's motion between frames. This is useful for building self-navigating systems.

2,014 stars. No commits in the last 6 months.

Use this if you need to determine the 3D structure of a scene and how a camera moves through it, using only standard video footage without specialized depth sensors.

Not ideal if you require real-time performance on embedded systems or need a solution that runs outside of the TensorFlow 1.0 ecosystem.

robotics · navigation · autonomous vehicles · computer vision · 3D scene reconstruction · visual odometry
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 2,014
Forks: 555
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 26, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tinghuiz/SfMLearner"
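If you want to call the same endpoint from a script, here is a minimal Python sketch. The `{category}/{owner}/{repo}` URL pattern is inferred from the single curl example above and may not generalize to other categories, and the JSON response schema is not documented on this page, so no field names are assumed:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repo.

    The path pattern is inferred from the one documented example
    (ml-frameworks/tinghuiz/SfMLearner); treat it as an assumption.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (schema undocumented)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reconstructs the exact URL shown in the curl example above.
print(quality_url("ml-frameworks", "tinghuiz", "SfMLearner"))
```

Note the anonymous tier is rate-limited to 100 requests/day, so cache responses rather than re-fetching per page load.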

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.