leob03/MultimodalDifMotionPred
[CVPR 2025 - HuMoGen] "MDMP: Multi-modal Diffusion for supervised Motion Predictions with uncertainty"
This project helps animators and researchers generate human motion sequences from text descriptions, offering a way to quickly visualize and evaluate different movements. You provide text prompts like "a person walking" and it outputs video animations of stick figures or fully rendered 3D human models (SMPL meshes). It's ideal for anyone involved in character animation, virtual reality, or human-computer interaction.
No commits in the last 6 months.
Use this if you need to generate diverse, realistic human animations from descriptive text, especially for prototyping or research.
Not ideal if you require frame-perfect, artist-controlled animation without any generative aspects, or if you need to animate non-human characters.
Stars: 17
Forks: 3
Language: Python
License: MIT
Category: diffusion
Last pushed: Mar 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/leob03/MultimodalDifMotionPred"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
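The same endpoint can be called from Python's standard library. This is a minimal sketch: the path layout (`category/owner/repo`) is inferred from the example curl command above, and the JSON field names in the response are not documented here, so treat both as assumptions.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL (path layout inferred from the example curl)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body.

    Works within the keyless 100 requests/day tier; response field
    names are an assumption, so inspect the returned dict first.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


url = quality_url("diffusion", "leob03", "MultimodalDifMotionPred")
```

Calling `fetch_quality("diffusion", "leob03", "MultimodalDifMotionPred")` returns the same payload as the curl command.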
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image/Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators