pabloruizponce/MixerMDM
[CVPR 2025] Official Implementation of "MixerMDM: Learnable Composition of Human Motion Diffusion Models".
This tool helps animators and researchers create realistic human motion sequences from text descriptions. You provide several text prompts describing both overall interactions and individual movements, and it generates a video or 3D animation of people performing those actions. It's designed for professionals working with character animation, virtual environments, or human behavior simulation.
No commits in the last 6 months.
Use this if you need to generate complex, multi-person human animations with fine-grained control over individual and group movements from textual input.
Not ideal if you're looking for a simple drag-and-drop tool without any technical setup or if you only need basic, single-person motions.
Stars: 26
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Sep 08, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/pabloruizponce/MixerMDM"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image/Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators