DiffGesture and DiffuseStyleGesture

Both tools address audio-driven co-speech gesture generation with diffusion models, but with different emphases: DiffGesture focuses on the core diffusion-based generation approach, while DiffuseStyleGesture extends it with explicit style control. This makes them complementary techniques that could be combined, rather than direct competitors.

DiffGesture: score 52 (Established)
  Maintenance: 13/25 · Adoption: 10/25 · Maturity: 16/25 · Community: 13/25
  Stars: 261 · Forks: 19 · Commits (30d): 0 · Language: Python · License: GPL-3.0
  No package published · No dependents

DiffuseStyleGesture: score 50 (Established)
  Maintenance: 6/25 · Adoption: 10/25 · Maturity: 16/25 · Community: 18/25
  Stars: 206 · Forks: 31 · Commits (30d): 0 · Language: Python · License: MIT
  No package published · No dependents

About DiffGesture

Advocate99/DiffGesture

[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation

This project helps create realistic co-speech gestures for virtual characters or avatars, making human-machine interactions more natural. It takes audio recordings of speech as input and generates corresponding body movements, specifically skeleton sequences that define the character's gestures. This is useful for animators, content creators, or researchers working with virtual assistants, digital actors, or interactive simulations.

virtual-avatar-animation character-design human-machine-interaction digital-storytelling virtual-reality
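The generation flow described above (speech audio in, skeleton sequence out) follows the standard diffusion recipe: start from Gaussian noise and iteratively denoise it into a pose sequence, conditioning each step on audio features. The sketch below illustrates that general DDPM reverse process with a toy stand-in model; it is not DiffGesture's actual API, and the schedule values, shapes, and `eps_model` signature are assumptions for illustration.

```python
import numpy as np

def make_schedule(T=50, beta_min=1e-4, beta_max=0.02):
    """Linear noise schedule, as in standard DDPMs."""
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def sample_gesture(eps_model, audio_feat, T=50, frames=34, joints=10, rng=None):
    """Reverse diffusion: denoise random noise into a (frames, joints*3) pose sequence."""
    rng = rng or np.random.default_rng(0)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal((frames, joints * 3))  # pure-noise skeleton sequence
    for t in reversed(range(T)):
        eps = eps_model(x, t, audio_feat)  # predicted noise, conditioned on audio
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise on all but the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

# Stand-in "network" that predicts zero noise (a trained model goes here).
dummy = lambda x, t, audio: np.zeros_like(x)
poses = sample_gesture(dummy, audio_feat=np.zeros(128))
print(poses.shape)  # → (34, 30)
```

In the real system the dummy lambda is replaced by a trained denoising network, and the audio features come from an encoder over the speech recording.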

About DiffuseStyleGesture

YoungSeng/DiffuseStyleGesture

DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)

This project helps creators generate realistic and expressive body gestures for virtual characters or avatars simply by providing an audio input. It takes an audio file with speech and a specified gesture style (e.g., happy, neutral) and outputs a motion file (BVH) that animates a character's upper body. This is ideal for animators, virtual content creators, or game developers looking to add natural, synchronized gestures to spoken dialogue.

3D-animation virtual-avatars character-design game-development digital-content-creation
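The style control described above (e.g., "happy" vs. "neutral") is commonly implemented in diffusion models via classifier-free guidance: at each denoising step the model is queried twice, with and without the style condition, and the two noise predictions are blended so a guidance weight can strengthen or weaken the style. The snippet below is a minimal sketch of that mechanism; the function names and the toy model are hypothetical, not DiffuseStyleGesture's actual interface.

```python
import numpy as np

def guided_eps(eps_model, x, t, audio, style, gamma=1.5):
    """Classifier-free guidance: blend style-conditioned and unconditioned
    noise predictions. gamma > 1 amplifies the style, gamma = 0 ignores it."""
    eps_cond = eps_model(x, t, audio, style)
    eps_uncond = eps_model(x, t, audio, None)  # style dropped
    return eps_uncond + gamma * (eps_cond - eps_uncond)

# Toy model: predicts ones when a style is given, zeros otherwise.
toy = lambda x, t, audio, style: (
    np.ones_like(x) if style is not None else np.zeros_like(x)
)
x = np.zeros((4, 6))
e = guided_eps(toy, x, t=0, audio=None, style="happy", gamma=1.5)
print(e[0, 0])  # → 1.5
```

During sampling, `guided_eps` would replace the plain noise prediction inside the reverse-diffusion loop, letting one knob trade off style expressiveness against fidelity to the speech.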

Scores are updated daily from GitHub, PyPI, and npm data.