zxz267/AvatarJLM
[ICCV 2023] Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling
This project helps animators, game developers, and virtual reality content creators produce realistic full-body movements for 3D characters, even when only limited data from head and hand trackers is available. It takes sparse tracking signals from the head and hands and outputs accurate, smooth, and believable full-body motion data that can be applied to 3D avatars. The primary users are professionals who need to create natural character animations efficiently.
No commits in the last 6 months.
Use this if you need to generate high-quality, realistic full-body animations for 3D characters using only input from head and hand tracking devices.
Not ideal if you already have dense, full-body motion capture data or if your application requires real-time, ultra-low latency tracking directly from live camera feeds without an intermediate processing step.
Stars: 52
Forks: 5
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Feb 29, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/zxz267/AvatarJLM"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
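The curl command above can also be issued from Python. A minimal sketch using only the standard library is shown below; the function and variable names are illustrative, and the response schema is not documented here, so the fetch helper simply returns the parsed JSON for the caller to inspect.

```python
import json
import urllib.request
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo, mirroring the curl example."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET a repo's quality record as parsed JSON.

    Requires network access. The response fields are not documented on this
    page, so inspect the returned dict's keys rather than assuming a schema.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Reproduces the URL from the curl example:
print(quality_url("computer-vision", "zxz267", "AvatarJLM"))
```

The anonymous tier (100 requests/day) needs no credentials, so a plain GET suffices; how an API key is attached to keyed requests is not specified on this page.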
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth, and 6D pose with one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.