AutoAILab/FusionDepth

Official implementation of the paper "Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR"

Score: 35 / 100 (Emerging)

This project helps autonomous robotics engineers and researchers perceive 3D environments accurately. It takes monocular camera images and sparse LiDAR data (e.g., from a 4-beam sensor) as input and produces dense depth maps, giving a robot a per-pixel distance estimate across its field of view, which is crucial for navigation, obstacle avoidance, and object detection.
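For intuition, a standard first step in this kind of camera-LiDAR fusion is projecting the sparse 3D points into the image plane to obtain a sparse depth channel that can be fed to a network alongside the RGB image. The sketch below shows only that generic step; it is not code from this repository, and the function name, intrinsics, and point cloud are illustrative.

import numpy as np

def project_lidar_to_sparse_depth(points_cam, K, height, width):
    # points_cam: (N, 3) LiDAR points already in the camera frame, in meters.
    # K: (3, 3) camera intrinsics matrix.
    # Returns an (H, W) depth map: depth in meters at hit pixels, 0 elsewhere.
    depth = np.zeros((height, width), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front of the camera
    uvw = pts @ K.T                               # perspective projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # If several points land on one pixel, keep the nearest: assign far-to-near
    # so the closest depth overwrites the rest.
    order = np.argsort(-pts[ok, 2])
    u, v, z = u[ok][order], v[ok][order], pts[ok, 2][order]
    depth[v, u] = z
    return depth

# Example with made-up intrinsics and random points 5-50 m ahead:
K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
pts = np.random.uniform([-10, -2, 5], [10, 2, 50], size=(2000, 3))
sparse = project_lidar_to_sparse_depth(pts, K, 720, 1280)

A common design choice is to concatenate the resulting sparse depth map with the image as an extra input channel, letting the network densify it.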

No commits in the last 6 months.

Use this if you need to generate accurate, dense depth maps from standard camera footage combined with low-cost sparse LiDAR for autonomous systems.

Not ideal if you do not have access to sparse LiDAR data or if your application requires real-time processing on extremely constrained hardware.

autonomous-driving robotics-perception 3D-reconstruction sensor-fusion environmental-sensing
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 96
Forks: 7
Language: Python
License: MIT
Last pushed: Jul 29, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AutoAILab/FusionDepth"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
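The same endpoint can be queried from Python. A minimal sketch using the requests library follows; the response schema is not documented here, so the code simply prints the JSON payload for inspection.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AutoAILab/FusionDepth"
resp = requests.get(url, timeout=10)  # no API key needed up to 100 requests/day
resp.raise_for_status()
print(resp.json())                    # exact fields are an assumption; inspect the payload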