karoly-hars/DE_resnet_unet_hyb
Depth estimation from RGB images using fully convolutional neural networks.
This project estimates the three-dimensional layout of a scene from a single standard photo or video frame. Given an ordinary color image, it outputs a depth map in which each pixel encodes the distance from the camera to that point in the scene. This is useful for robotics engineers, autonomous vehicle developers, or anyone who needs to infer spatial relationships from 2D visual data.
No commits in the last 6 months.
Use this if you need to quickly generate depth information from individual RGB images or videos, particularly for indoor or urban scenes similar to the NYU Depth v2 dataset.
Not ideal if you require a system that can be trained on your own custom datasets, as the training code is not provided.
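To make the input/output contract concrete, here is a minimal sketch (not the repo's code, and not its ResNet-UNet network): a monocular depth estimator maps an H×W RGB image to an H×W single-channel depth map aligned pixel-for-pixel with the input. A dummy gradient stands in for the network's prediction.

```python
import numpy as np

# Conceptual sketch only: illustrates the shapes involved in monocular
# depth estimation, not the repo's actual model.
h, w = 240, 320
rgb = np.random.randint(0, 256, size=(h, w, 3), dtype=np.uint8)  # ordinary color image

# A depth estimator maps (h, w, 3) -> (h, w). Here a left-to-right gradient
# stands in for the predicted per-pixel distances (in metres, hypothetically).
depth = np.tile(np.linspace(0.5, 10.0, w, dtype=np.float32), (h, 1))

assert depth.shape == rgb.shape[:2]  # one depth value per input pixel
print(depth[120, 160])               # predicted distance at the centre pixel
```

The key point is the alignment: `depth[y, x]` answers "how far away is the scene point shown at `rgb[y, x]`?"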
Stars: 53
Forks: 13
Language: Python
License: BSD-3-Clause
Category: Computer Vision
Last pushed: Apr 27, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/karoly-hars/DE_resnet_unet_hyb"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
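The same endpoint can be called from Python with only the standard library. The URL comes from the curl example above; the response field names and the authorization header used for keyed requests are assumptions, so inspect the returned JSON before depending on it.

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for one repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """GET the quality record as a dict.

    A free key raises the limit from 100 to 1,000 requests/day; the
    'Authorization: Bearer' header name used here is an assumption.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Matches the curl example on this page:
print(quality_url("computer-vision", "karoly-hars", "DE_resnet_unet_hyb"))
```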
Higher-rated alternatives
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
nburrus/stereodemo
Small Python utility to compare and visualize the output of various stereo depth estimation algorithms
JiawangBian/sc_depth_pl
SC-Depth (V1, V2, and V3) for Unsupervised Monocular Depth Estimation ...
wvangansbeke/Sparse-Depth-Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st...