DiffGesture and DiffuseStyleGesture
Both tools address audio-driven co-speech gesture generation with diffusion models, but their emphases differ: DiffGesture focuses on the core diffusion-based generation approach, while DiffuseStyleGesture extends it with explicit style control. That makes them complementary techniques that could be combined rather than direct competitors.
About DiffGesture
Advocate99/DiffGesture
[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
This project helps create realistic co-speech gestures for virtual characters or avatars, making human-machine interactions more natural. It takes audio recordings of speech as input and generates corresponding body movements, specifically skeleton sequences that define the character's gestures. This is useful for animators, content creators, or researchers working with virtual assistants, digital actors, or interactive simulations.
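Under the hood, DiffGesture follows the standard denoising diffusion (DDPM) recipe: start from Gaussian noise and iteratively denoise it into a pose sequence, conditioning each step on features extracted from the speech audio. The sketch below illustrates that reverse process in PyTorch; the `model` callable, the audio feature tensor, and the frame/joint dimensions are illustrative assumptions, not the repository's actual API.

```python
import torch

# Minimal sketch of audio-conditioned DDPM sampling for gesture generation.
# `model` is a hypothetical stand-in for the denoising network; the clip length
# (34 frames), pose dimension (27), and noise schedule are assumptions.

def sample_gesture(model, audio_feat, n_frames=34, pose_dim=27,
                   n_steps=500, device="cpu"):
    """Run the DDPM reverse process: start from Gaussian noise and
    iteratively denoise into a skeleton pose sequence."""
    betas = torch.linspace(1e-4, 0.02, n_steps, device=device)  # linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, n_frames, pose_dim, device=device)       # x_T ~ N(0, I)
    for t in reversed(range(n_steps)):
        # Predict the noise component, conditioned on the audio features.
        eps = model(x, torch.tensor([t], device=device), audio_feat)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise                  # x_{t-1}
    return x  # (1, n_frames, pose_dim) skeleton sequence
```

The returned tensor is a per-frame skeleton pose sequence that can then be retargeted onto a character rig.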
About DiffuseStyleGesture
YoungSeng/DiffuseStyleGesture
DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)
This project helps creators generate realistic and expressive body gestures for virtual characters or avatars simply by providing an audio input. It takes an audio file with speech and a specified gesture style (e.g., happy, neutral) and outputs a motion file (BVH) that animates a character's upper body. This is ideal for animators, virtual content creators, or game developers looking to add natural, synchronized gestures to spoken dialogue.
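Style control of this kind is typically implemented with classifier-free guidance, which the DiffuseStyleGesture paper uses to adjust style intensity: the network is trained with the style input randomly masked, and at sampling time the style-conditioned and unconditional noise predictions are blended. A minimal sketch, assuming hypothetical `model` and `style_emb` arguments rather than the repository's actual interface:

```python
import torch

# Minimal sketch of style control via classifier-free guidance.
# `model`, `style_emb`, and the argument shapes are illustrative assumptions.

def guided_noise(model, x, t, audio_feat, style_emb, guidance_scale=2.5):
    """Blend unconditional and style-conditioned noise predictions.
    Used in place of the raw prediction at each denoising step."""
    eps_uncond = model(x, t, audio_feat, style=None)       # style input masked
    eps_style = model(x, t, audio_feat, style=style_emb)   # style-conditioned
    return eps_uncond + guidance_scale * (eps_style - eps_uncond)
```

Raising `guidance_scale` above 1 exaggerates the requested style, while a value of 0 falls back to style-agnostic motion.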