michael-fonder/M4Depth

Official implementation of the network presented in the paper "Parallax Inference for Robust Temporal Monocular Depth Estimation in Unstructured Environments"

Quality score: 41 / 100 (Emerging)

This project helps engineers working with autonomous systems or robotics estimate the depth of objects in a scene from standard video footage. It takes a sequence of RGB images from a moving camera and outputs a detailed depth map, showing how far away objects are. This is useful for anyone needing to understand 3D space from 2D video, like developers of self-driving cars or drone navigation systems.

No commits in the last 6 months.

Use this if you need real-time, accurate depth estimation from a single camera's video feed in complex, unpredictable environments, and you have limited GPU memory.

Not ideal if your application requires depth from static images or relies on stereo cameras or LiDAR for depth sensing.

Tags: robotics, autonomous-vehicles, drone-navigation, computer-vision, spatial-awareness
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 93
Forks: 15
Language: Python
License: AGPL-3.0
Last pushed: Jun 13, 2023
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/michael-fonder/M4Depth"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
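The same endpoint can be queried from code. Below is a minimal Python sketch using only the standard library; the URL pattern comes from the curl command above, but the response schema is not documented here, so the helper simply returns the parsed JSON as-is (the function names are illustrative, not part of any official client):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (raises on HTTP errors).

    Note: the response fields are not documented on this page, so the
    result is returned untyped.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


url = quality_url("computer-vision", "michael-fonder", "M4Depth")
# url == "https://pt-edge.onrender.com/api/v1/quality/computer-vision/michael-fonder/M4Depth"
```

Without a key this is limited to 100 requests per day, so cache responses rather than re-fetching on every call.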