nianticlabs/implicit-depth
[CVPR 2023] Virtual Occlusions Through Implicit Depth
This project supports realistic 3D scene composition by estimating per-pixel depth from images, even where objects overlap. It takes an input image sequence and, optionally, low-resolution depth maps (such as those produced by AR frameworks) and produces detailed depth information. It is aimed at researchers and developers compositing virtual objects into real-world scenes.
No commits in the last 6 months.
Use this if you need highly accurate, per-pixel depth information for complex scenes to correctly handle virtual occlusions, especially in augmented reality or computer vision applications.
Not ideal if you're looking for a simple, out-of-the-box tool for general-purpose depth estimation without access to existing 3D scene data or developer expertise.
Stars
87
Forks
5
Language
Python
License
—
Category
computer-vision
Last pushed
May 09, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/nianticlabs/implicit-depth"
Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000/day.
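The same endpoint can also be called from code. As a minimal sketch, the helper below builds the request URL shown above from its parts; only the URL itself is taken from this page, while the function name and parameter split are illustrative, and the response schema is not documented here.

```python
from urllib.parse import urljoin

# Base endpoint as shown in the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository.

    The category/owner/repo path layout is inferred from the
    example URL on this page.
    """
    return urljoin(API_BASE, f"{category}/{owner}/{repo}")

url = quality_url("computer-vision", "nianticlabs", "implicit-depth")
print(url)
# The resulting URL can then be fetched with any HTTP client,
# e.g. urllib.request.urlopen(url) or requests.get(url).
```

Fetching and parsing the response is left to the caller, since the JSON fields returned by the API are not documented on this page.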
Higher-rated alternatives
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
nburrus/stereodemo
Small Python utility to compare and visualize the output of various stereo depth estimation algorithms
JiawangBian/sc_depth_pl
SC-Depth (V1, V2, and V3) for Unsupervised Monocular Depth Estimation ...
wvangansbeke/Sparse-Depth-Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st...