songlin/d3roma

A diffusion-model-based stereo depth estimation framework that predicts accurate depth and restores noisy depth maps for transparent and specular surfaces

Score: 29 / 100 (Experimental)

This project helps robots perceive depth for optically challenging objects, such as transparent bottles or shiny metal surfaces, which often confuse standard depth sensors. It takes stereo images or RGB plus raw depth as input and outputs a refined depth map and 3D point cloud of the scene. It is aimed at roboticists, automation engineers, and anyone building robotic systems that manipulate diverse objects in real-world settings.

No commits in the last 6 months.

Use this if your robotic system struggles to sense the depth of transparent or specular objects accurately, hindering tasks like grasping or assembly.

Not ideal if your application primarily deals with matte, opaque objects where standard depth sensors already perform well.

robotic-perception depth-sensing material-agnostic-manipulation 3d-reconstruction automation
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 12 / 25

How are scores calculated?
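Judging by the numbers on this page, the overall score appears to be the simple sum of the four category scores (each out of 25); this is an inference from the figures shown, not a documented formula. A quick check:

```python
# Category scores as listed above (each out of 25).
scores = {"Maintenance": 0, "Adoption": 9, "Maturity": 8, "Community": 12}

total = sum(scores.values())
print(total)  # 29, matching the 29/100 overall score shown above
```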

Stars: 89
Forks: 9
Language: Python
License: None
Last pushed: Feb 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/songlin/d3roma"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
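The same endpoint can be called from Python. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, and the `Authorization: Bearer` header for keyed access is an assumption):

```python
# Minimal sketch of a client for the quality API shown above.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(ecosystem, repo):
    """Compose the API URL, e.g. build_url('diffusion', 'songlin/d3roma')."""
    return f"{BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem, repo, api_key=None):
    """Fetch the quality report as a dict. Passing a key (hypothetical
    header name: Authorization Bearer) would raise the daily limit."""
    req = urllib.request.Request(build_url(ecosystem, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Example usage (performs a live network request):
#   report = fetch_quality("diffusion", "songlin/d3roma")
#   print(json.dumps(report, indent=2))
```

The functions are defined but not called at import time, so the sketch can be dropped into a script without triggering a request.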