Advocate99/DiffGesture

[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation

Quality score: 52 / 100 (Established)

This project helps create realistic co-speech gestures for virtual characters or avatars, making human-machine interactions more natural. It takes audio recordings of speech as input and generates corresponding body movements, specifically skeleton sequences that define the character's gestures. This is useful for animators, content creators, or researchers working with virtual assistants, digital actors, or interactive simulations.


Use this if you need to animate virtual avatars with natural, synchronized gestures based on spoken audio.

Not ideal if you need to generate gestures from non-speech audio or if you're looking for a simple drag-and-drop animation solution without coding.

virtual-avatar-animation character-design human-machine-interaction digital-storytelling virtual-reality
No package · No dependents

Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 261
Forks: 19
Language: Python
License: GPL-3.0
Last pushed: Mar 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Advocate99/DiffGesture"

Open to everyone: 100 requests/day, no API key needed. Get a free key for 1,000 requests/day.
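The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the URL path is taken from the curl command above, but the shape of the JSON response (and its field names) is not documented here, so the decoded report is returned as-is rather than parsed into specific fields.

```python
# Minimal sketch: fetch a repository quality report from the pt-edge API.
# The endpoint path mirrors the curl example above; the response is
# assumed to be JSON, but its exact schema is not documented here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the quality report and decode the JSON body."""
    with urllib.request.urlopen(
        quality_url(ecosystem, owner, repo), timeout=10
    ) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example for this repository.
    print(quality_url("diffusion", "Advocate99", "DiffGesture"))
```

At 100 unauthenticated requests/day, callers polling many repositories should cache responses or register for the higher-limit key.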