UT-Austin-RPL/Ditto
Code for Ditto: Building Digital Twins of Articulated Objects from Interaction
This project helps robotics engineers and researchers create digital twins of articulated objects such as doors and drawers. Given depth-camera observations captured before and after an interaction (for example, opening a drawer), it reconstructs the object's part-level 3D geometry and estimates the joint parameters that describe how the parts move. The output is a digital model of the articulated object that can be used in simulation.
125 stars. No commits in the last 6 months.
Use this if you need to generate digital models of articulated objects from real-world observations, for simulation or robotic manipulation tasks.
Not ideal if you are looking for a solution that doesn't require interaction data, or if you only need a static 3D model without articulation information.
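To make the input/output contract concrete, here is a minimal illustrative sketch in Python. This is not Ditto's actual API: every name in it (reconstruct_articulated_object, ArticulationEstimate, the joint fields) is hypothetical, and real usage goes through the repository's own notebooks and configs.

# Illustrative stub only; all names are hypothetical, not Ditto's real API.
# A Ditto-style pipeline takes two partial point clouds of the same object,
# captured before and after an interaction, and predicts part geometry plus
# joint parameters (type, axis, pivot, state change).
from dataclasses import dataclass
import numpy as np

@dataclass
class ArticulationEstimate:
    joint_type: str       # "revolute" or "prismatic" (hypothetical field)
    axis: np.ndarray      # unit 3-vector: joint axis direction
    pivot: np.ndarray     # 3D point on the axis (for revolute joints)
    state_change: float   # radians (revolute) or meters (prismatic)

def reconstruct_articulated_object(pc_before: np.ndarray,
                                   pc_after: np.ndarray) -> ArticulationEstimate:
    """Hypothetical placeholder for the learned model's forward pass."""
    # A real implementation would encode both clouds, segment the mobile
    # part, and decode geometry + joint parameters; this returns dummies.
    axis = np.array([0.0, 0.0, 1.0])
    return ArticulationEstimate("revolute", axis, pc_before.mean(axis=0), 0.5)

# Two depth-camera point clouds (N x 3), before and after opening a drawer.
pc_before = np.random.rand(8192, 3)
pc_after = np.random.rand(8192, 3)
estimate = reconstruct_articulated_object(pc_before, pc_after)
print(estimate.joint_type, estimate.axis, estimate.state_change)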
Stars: 125
Forks: 18
Language: Jupyter Notebook
License: MIT
Category: ml-frameworks
Last pushed: Dec 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UT-Austin-RPL/Ditto"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
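For programmatic access, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns JSON; the response schema is not documented on this page, so the script just pretty-prints whatever comes back.

import json
import urllib.request

# Same endpoint as the curl command above; no API key needed
# within the 100 requests/day free tier.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UT-Austin-RPL/Ditto"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))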
Higher-rated alternatives
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
jgraving/DeepPoseKit
A toolkit for pose estimation using deep learning
kenshohara/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)