GitGud-f/Delta
A lightweight deep learning model for depth estimation, designed to run on edge devices and deliver near-real-time results. The project uses knowledge distillation to transfer knowledge from DepthAnythingV2 to a smaller student model.
This project helps you build a specialized computer vision model that infers the 3D layout of a scene from a single 2D image, much as a human does. You input a regular photo, and it outputs a depth map showing how far away each object is. It targets mobile apps and devices that need to analyze images quickly, making it useful for augmented reality, robotics, or any application requiring real-time spatial awareness.
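The core idea of knowledge distillation here is that the large teacher (DepthAnythingV2) generates depth maps that serve as training targets for the small student. A minimal sketch of that training signal, assuming a simple per-pixel L1 objective (the function name and toy values are illustrative, not taken from this repository):

```python
def distillation_loss(student_depth, teacher_depth):
    """Mean absolute (L1) difference between the student's predicted
    depth values and the teacher's pseudo-label depth values.
    Inputs are flat lists of per-pixel depths; a real pipeline would
    use full tensors (e.g. PyTorch) and minimize this by gradient descent."""
    diffs = [abs(s - t) for s, t in zip(student_depth, teacher_depth)]
    return sum(diffs) / len(diffs)

# Toy 4-pixel depth maps: teacher pseudo-labels vs. student predictions.
teacher = [1.0, 2.0, 3.0, 4.0]
student = [1.5, 2.0, 2.5, 4.0]
loss = distillation_loss(student, teacher)  # 0.25
```

Because the teacher's outputs stand in for ground-truth depth, no lidar or stereo supervision is needed; the student only needs to mimic the teacher on unlabeled images.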
No commits in the last 6 months.
Use this if you need to integrate fast, accurate depth perception from standard camera images into mobile applications or edge devices where computational resources are limited.
Not ideal if you require extremely precise, lidar-quality depth measurements for scientific or industrial applications where even minor inaccuracies are critical.
Stars: 18
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/GitGud-f/Delta"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
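The endpoint path in the curl example follows a category/owner/repo pattern. A small sketch of building that URL for any repository, assuming the path structure shown above (the helper name is hypothetical, and the response schema is not documented here, so this only constructs the request URL):

```python
import urllib.parse

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-endpoint URL for a repository, percent-encoding
    each path segment. 'ml-frameworks' is the category shown on this page."""
    parts = [urllib.parse.quote(p, safe="") for p in (category, owner, repo)]
    return f"{BASE}/{parts[0]}/{parts[1]}/{parts[2]}"

url = quality_url("ml-frameworks", "GitGud-f", "Delta")
# → "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/GitGud-f/Delta"
```

Fetching the URL with any HTTP client (curl, `urllib.request`, requests) then returns the repository's quality data, subject to the rate limits above.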
Higher-rated alternatives
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos