LinghaoChan/HumanMAC
[ICCV-2023] Official code for work "HumanMAC: Masked Motion Completion for Human Motion Prediction".
This project helps animators, game developers, and researchers predict natural human movements from existing motion data. Given a sequence of human poses, it outputs a realistic continuation of that motion, supporting predictions across varied activities with control over individual body parts. It suits anyone working with virtual humans who needs believable, continuous character animations.
323 stars. No commits in the last 6 months.
Use this if you need to extend short clips of human motion into longer, natural-looking sequences or generate varied motion predictions for a given starting pose.
Not ideal if you need to generate motions from text descriptions, as this tool focuses on predicting continuations from visual motion data.
Stars: 323
Forks: 19
Language: Python
License: MIT
Category: diffusion
Last pushed: May 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/LinghaoChan/HumanMAC"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
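The endpoint above returns repository metrics as JSON. A minimal sketch of consuming the response in Python, assuming a flat schema whose field names mirror the stats on this card (the exact keys are illustrative, not documented; inspect a real response before relying on them):

```python
import json

# Hypothetical sample payload; field names are assumptions based on the
# card above, not a documented schema.
sample = json.loads("""
{
  "repo": "LinghaoChan/HumanMAC",
  "stars": 323,
  "forks": 19,
  "language": "Python",
  "license": "MIT",
  "last_pushed": "2024-05-05",
  "commits_30d": 0
}
""")

def summarize(data: dict) -> str:
    """Build a one-line health summary from the quality payload."""
    # Treat a repo with no commits in the last 30 days as dormant.
    activity = "active" if data.get("commits_30d", 0) > 0 else "dormant"
    return (f"{data['repo']}: {data['stars']} stars, "
            f"{data['forks']} forks, {activity}")

print(summarize(sample))
```

To use it against the live endpoint, replace `sample` with the parsed body of the `curl` request shown above.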
Higher-rated alternatives
hao-ai-lab/FastVideo: A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V: Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios: Real Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime: [TPAMI 2025🔥] Time-lapse Video Generation Models as Metamorphic Simulators