karoly-hars/DE_resnet_unet_hyb

Depth estimation from RGB images using fully convolutional neural networks.

Quality score: 42 / 100 (Emerging)

This project estimates the three-dimensional layout of a scene from a single standard photo or video frame. It takes an ordinary color image and outputs a depth map, where each pixel value represents how far that point in the scene is from the camera. This is useful for robotics engineers, autonomous vehicle developers, or anyone who needs to infer spatial relationships from 2D visual data.
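As a minimal sketch of the output format (not code from this repository): a depth map is simply a 2-D array of per-pixel distances, which is typically normalized to [0, 1] before being rendered as a grayscale image.

```python
# Toy 4x4 "depth map": each value is a hypothetical distance (in meters)
# from the camera to the scene point at that pixel.
depth = [
    [1.0, 1.2, 2.5, 3.0],
    [1.1, 1.3, 2.6, 3.1],
    [4.0, 4.2, 5.0, 5.5],
    [4.1, 4.3, 5.1, 5.6],
]

def normalize_depth(d):
    """Scale depth values to [0, 1] so they can be shown as grayscale."""
    flat = [v for row in d for v in row]
    lo, hi = min(flat), max(flat)
    return [[(v - lo) / (hi - lo) for v in row] for row in d]

vis = normalize_depth(depth)
```

Nearby pixels with similar values (like the top-left block above) read as a surface at roughly constant distance; large jumps between rows mark depth discontinuities such as object edges.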

No commits in the last 6 months.

Use this if you need to quickly generate depth information from individual RGB images or videos, particularly for indoor or urban scenes similar to the NYU Depth v2 dataset.

Not ideal if you require a system that can be trained on your own custom datasets, as the training code is not provided.

Tags: robotics, computer vision, autonomous navigation, 3D reconstruction, scene understanding
Badges: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 53
Forks: 13
Language: Python
License: BSD-3-Clause
Last pushed: Apr 27, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/karoly-hars/DE_resnet_unet_hyb"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
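The same request can be made from Python with the standard library. This is a sketch: the JSON field names in the response are not documented here, so inspect the decoded dictionary before relying on any particular key.

```python
import json
from urllib.request import urlopen

# Endpoint from the curl example above.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "computer-vision/karoly-hars/DE_resnet_unet_hyb")

def fetch_quality(url: str = API_URL) -> dict:
    """GET the quality endpoint and decode the JSON body."""
    with urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `fetch_quality()` performs a live network request, so wrap it in error handling (e.g. `urllib.error.URLError`) in anything beyond a quick script.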