bytedance/OHTA
[CVPR2024] OHTA: One-shot Hand Avatar via Data-driven Implicit Priors
This project helps creators and animators generate realistic, animatable 3D hand models from just a single input image. It takes a photo of a hand and produces a digital hand avatar that can be posed, textured, and integrated into various virtual scenes. This is ideal for artists, game developers, VFX specialists, or researchers working with hand animations.
No commits in the last 6 months.
Use this if you need to quickly create a customizable 3D hand model from a single photograph for animation, visual effects, or virtual reality applications.
Not ideal if you require highly precise, medical-grade anatomical hand models or need to capture complex, multi-view hand motions.
Stars: 33
Forks: —
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Jun 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/bytedance/OHTA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
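The curl command above can also be issued from Python. Below is a minimal sketch: the endpoint path (category/owner/repo) is taken from the curl example, but the shape of the JSON response is an assumption and may differ from what the service actually returns.

```python
# Sketch of calling the pt-edge quality endpoint shown above.
# Endpoint path follows the curl example; response schema is assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (no API key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL used in the curl example above.
    print(quality_url("computer-vision", "bytedance", "OHTA"))
```

For higher limits, the free key mentioned above would presumably be passed as a header or query parameter; consult the API's own documentation for the exact mechanism.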
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth and 6D pose by one line code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.