Yi-Shi94/AMDM
Interactive Character Control with Auto-Regressive Motion Diffusion Models
This project helps animators and game developers create realistic, interactive character movements. Given existing motion capture data as input, it generates diverse, continuous animations that can be controlled in real time. This is ideal for those building virtual worlds or interactive experiences where characters need to respond dynamically.
194 stars. No commits in the last 6 months.
Use this if you need to generate high-quality, auto-regressive character animations from motion capture data and want interactive control over the generated movements.
Not ideal if you are looking to animate non-human characters or require purely physics-based simulations without leveraging motion capture inputs.
Stars
194
Forks
13
Language
Python
License
BSD-3-Clause
Category
Diffusion
Last pushed
Oct 26, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Yi-Shi94/AMDM"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
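The endpoint returns JSON, so the response can be consumed programmatically. A minimal parsing sketch in Python follows; the field names (`stars`, `commits_30d`, etc.) are illustrative assumptions, not the documented schema:

```python
import json

# Sample payload with an assumed shape -- the field names here are
# illustrative guesses, not the documented pt-edge API schema.
sample = """
{
  "repo": "Yi-Shi94/AMDM",
  "stars": 194,
  "forks": 13,
  "language": "Python",
  "license": "BSD-3-Clause",
  "commits_30d": 0
}
"""

data = json.loads(sample)

# A repo with zero commits in the last 30 days may be unmaintained.
is_stale = data["commits_30d"] == 0
print(f"{data['repo']}: {data['stars']} stars, stale={is_stale}")
```

In practice you would replace the hardcoded sample with the body returned by the `curl` command above (e.g. via `urllib.request.urlopen`).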
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators