ialhashim/DenseDepth

High Quality Monocular Depth Estimation via Transfer Learning

51 / 100 · Established

This project converts standard 2D photos or video frames into detailed depth maps, showing how far away objects are from the camera. It takes a single color image as input and outputs a grayscale depth map or a 3D point-cloud reconstruction. It is useful for 3D computer vision, robotics, and augmented-reality applications.

1,605 stars. No commits in the last 6 months.

Use this if you need to infer depth quickly and accurately from a single camera image, for applications like 3D scene understanding or object interaction.

Not ideal if you require extremely precise, lidar-level depth measurements, or if your application cannot accommodate the GPU required for inference.

3D-reconstruction robotics-perception augmented-reality computer-vision autonomous-driving
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25

How are scores calculated?
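The four subscores above appear to sum directly to the overall score (0 + 10 + 16 + 25 = 51). A minimal sketch under that assumption; the site's actual weighting is not documented here:

```python
# Hypothetical reconstruction: treat the overall score as a plain sum
# of the four 0-25 subscores shown on this page. The real formula may
# differ (e.g. weighted categories).
subscores = {
    "Maintenance": 0,
    "Adoption": 10,
    "Maturity": 16,
    "Community": 25,
}

total = sum(subscores.values())
print(f"{total} / 100")  # 51 / 100
```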

Stars: 1,605
Forks: 349
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Dec 07, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ialhashim/DenseDepth"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
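The same endpoint can be called from Python instead of curl. Only the URL is taken from this page; the response's JSON field names are not documented here, so no parsing is assumed:

```python
# Sketch of a Python client for the quality endpoint shown above.
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection, owner, repo):
    # Percent-encode each path segment defensively.
    return f"{BASE}/{quote(collection, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("ml-frameworks", "ialhashim", "DenseDepth")
print(url)

# Live request (counts against the 100 requests/day anonymous quota):
# payload = json.loads(urlopen(url).read().decode())
```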