zc-alexfan/hold
[CVPR 2024✨Highlight] Official repository for HOLD, the first method that jointly reconstructs articulated hands and objects from monocular videos without assuming a pre-scanned object template or 3D hand-object training data.
This tool helps researchers and computer vision engineers reconstruct detailed 3D models of hands interacting with objects from standard video footage. Given a monocular video of someone manipulating an object with their hands, it produces a 3D reconstruction of both the hand motion and the object's shape, even for objects it has never seen before. It's designed for those working in fields like human-computer interaction, robotics, or animation.
Use this if you need to accurately capture and analyze complex 3D hand-object interactions from existing video recordings, especially when dealing with new or custom objects.
Not ideal if you require real-time 3D reconstruction for live applications or if your primary focus is on facial recognition or full-body pose estimation.
Stars: 467
Forks: 14
Language: Python
License: MIT
Category:
Last pushed: Mar 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/zc-alexfan/hold"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
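The same endpoint can be queried from Python instead of curl. A minimal sketch, using only the standard library; the URL pattern is taken from the curl example above, while the fields in the JSON response are not documented here, so the code just decodes and returns whatever the API sends back.

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response for one repository.

    The response schema is undocumented, so the raw decoded
    dict is returned as-is.
    """
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


# Example (performs a network request):
# data = fetch_quality("computer-vision", "zc-alexfan", "hold")
```

Anonymous callers share the 100-requests/day limit, so cache the response rather than calling the endpoint in a loop.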
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth, and 6D pose with one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.