leob03/MultimodalDifMotionPred

[CVPR 2025 - HuMoGen] "MDMP: Multi-modal Diffusion for supervised Motion Predictions with uncertainty"

Score: 35 / 100 (Emerging)

This project helps animators and researchers generate human motion sequences from text descriptions, offering a way to quickly visualize and evaluate different movements. You provide text prompts like "a person walking" and it outputs video animations of stick figures or fully rendered 3D human models (SMPL meshes). It's ideal for anyone involved in character animation, virtual reality, or human-computer interaction.

No commits in the last 6 months.

Use this if you need to generate diverse, realistic human animations from descriptive text, especially for prototyping or research.

Not ideal if you require frame-perfect, artist-controlled animation without any generative aspects, or if you need to animate non-human characters.

Tags: human-animation, motion-synthesis, character-design, virtual-reality, computational-creativity
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 17
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/leob03/MultimodalDifMotionPred"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.