UT-Austin-RPL/Ditto

Code for Ditto: Building Digital Twins of Articulated Objects from Interaction

Quality score: 42 / 100 (Emerging)

This project helps robotics engineers and researchers create digital twins of movable objects such as doors or drawers. Given depth-camera observations captured before and after an interaction (for example, opening a drawer), it reconstructs the object's 3D shape and estimates how its parts move relative to each other. The output is a precise digital model of the articulated object.

125 stars. No commits in the last 6 months.

Use this if you need to quickly and accurately generate digital models of articulated objects from real-world observations for simulations or robotic manipulation tasks.

Not ideal if you are looking for a solution that doesn't require interaction data, or if you only need a static 3D model without articulation information.

robotics digital-twin 3D-reconstruction articulated-objects computer-vision
Stale (6 months) · No package · No dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 125
Forks: 18
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UT-Austin-RPL/Ditto"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
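The same endpoint can be called from code. A minimal Python sketch, assuming only the URL pattern shown in the curl example above; the JSON response schema is not documented here, so inspect the actual response before relying on any field names:

```python
# Sketch: fetch quality data for a repo from the pt-edge API.
# The URL pattern is taken from the curl example on this page; the
# "ml-frameworks" category segment is copied from that example and may
# differ for other repos. Response field names are an assumption.
import json
import urllib.request


def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given GitHub owner/repo pair.
    return (
        "https://pt-edge.onrender.com/api/v1/quality/"
        f"ml-frameworks/{owner}/{repo}"
    )


url = quality_url("UT-Austin-RPL", "Ditto")
print(url)

# Uncomment to perform the request (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```

This keeps the network call commented out so the URL construction can be checked offline first.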