tub-rip/DERD-Net
DERD-Net: Learning Depth from Event-based Ray Densities (NeurIPS 2025 Spotlight)
This project helps engineers and researchers working with event cameras accurately estimate the distance of objects in dynamic environments. It takes event-camera data, often recorded from drones or autonomous vehicles, and turns it into dense per-pixel depth maps, which is useful for anyone building systems that need precise spatial awareness, such as drone navigation or robot localization.
Use this if you need highly accurate, real-time depth measurements from event-based cameras, especially in scenarios where traditional frame-based cameras struggle, such as high-speed motion or low light.
Not ideal if your primary data source is a standard frame-based camera, or if you need depth estimation without the specialized input an event camera provides.
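For orientation, event cameras report asynchronous per-pixel brightness changes rather than frames: each event is roughly a (timestamp, x, y, polarity) tuple, and the network's output is a dense per-pixel depth map. A minimal sketch of these data shapes follows (illustrative only; the array layouts and names are assumptions, not DERD-Net's actual interfaces):

import numpy as np

# Hypothetical shapes for illustration; not the repository's actual data format.
H, W = 180, 240                              # example event-camera resolution

# Event stream: each row is (timestamp in seconds, x, y, polarity in {-1, +1}).
events = np.array([
    [0.001, 120.0, 90.0, +1],
    [0.002, 121.0, 90.0, -1],
    [0.004,  60.0, 45.0, +1],
])

# Dense per-pixel depth map in meters, one value per pixel (NaN where unknown).
depth_map = np.full((H, W), np.nan, dtype=np.float32)
depth_map[90, 120] = 1.75                    # e.g., a point 1.75 m from the camera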
Stars: 16
Forks: —
Language: Jupyter Notebook
License: MIT
Category: Computer Vision
Last pushed: Nov 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/tub-rip/DERD-Net"
Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
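The same endpoint can also be queried from Python, for example with the requests library (a minimal sketch; the response schema is not documented here and is an assumption):

import requests

# Endpoint from the curl example above; anonymous access is limited to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/tub-rip/DERD-Net"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()        # response fields are an assumption, not a documented schema
print(data)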
Higher-rated alternatives
3DOM-FBK/deep-image-matching
Multiview matching with deep-learning and hand-crafted local features for COLMAP and other SfM...
suhangpro/mvcnn
Multi-view CNN (MVCNN) for shape recognition
zouchuhang/LayoutNet
Torch implementation of our CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a...
andyzeng/tsdf-fusion-python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
andyzeng/tsdf-fusion
Fuse multiple depth frames into a TSDF voxel volume.