zc-alexfan/hold

[CVPR 2024 ✨Highlight] Official repository for HOLD, the first method that jointly reconstructs articulated hands and objects from monocular videos without assuming a pre-scanned object template or 3D hand-object training data.

Quality score: 45 / 100 (Emerging)

This tool helps researchers and computer vision engineers reconstruct detailed 3D models of hands interacting with various objects from standard video footage. You provide a monocular video of someone using their hands with an object, and it produces a 3D model of both the hand movements and the object's shape, even for objects it hasn't seen before. It's designed for those working in fields like human-computer interaction, robotics, or animation.


Use this if you need to accurately capture and analyze complex 3D hand-object interactions from existing video recordings, especially when dealing with new or custom objects.

Not ideal if you require real-time 3D reconstruction for live applications or if your primary focus is on facial recognition or full-body pose estimation.

Tags: human-computer-interaction, robotics, 3d-animation, computer-vision, motion-capture

No package · No dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 9 / 25

Stars: 467
Forks: 14
Language: Python
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/zc-alexfan/hold"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
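The same endpoint can be queried from Python instead of curl. A minimal sketch, assuming only the URL pattern shown above; the `quality_url` and `fetch_quality` helper names are illustrative, and the shape of the returned JSON is not documented here, so the response is treated as an opaque dict:

```python
# Sketch: querying the pt-edge quality API for a repo (endpoint pattern taken
# from the curl example above; helper names and response handling are assumptions).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality record as JSON (free tier: 100 requests/day, no key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the endpoint for this page's repo; call fetch_quality() to hit the API.
    print(quality_url("computer-vision", "zc-alexfan", "hold"))
```

Calling `fetch_quality("computer-vision", "zc-alexfan", "hold")` should return the same record as the curl command, subject to the daily rate limit.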