JeremyCJM/DiffSHEG

[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation

Quality score: 38/100 (Emerging)

This tool helps create realistic 3D animated characters that express themselves naturally while speaking. You provide an audio file, and it generates corresponding 3D facial expressions and body gestures in real time. This is ideal for animators, content creators, or game developers who want to automate character animation from dialogue.

196 stars. No commits in the last 6 months.

Use this if you need to quickly generate lifelike 3D character animations, including both facial expressions and body movements, from spoken audio.

Not ideal if you require highly specific, manually controlled, or stylized animations that do not rely on speech input.

Tags: 3D-animation, character-design, virtual-assistants, gaming-development, content-creation
Flags: stale for 6 months, no published package, no known dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 12 / 25
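
The four components, each scored out of 25, sum to the overall rating: 0 + 10 + 16 + 12 = 38 out of 100.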

Stars: 196
Forks: 16
Language: Python
License: BSD-3-Clause
Last pushed: Apr 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/JeremyCJM/DiffSHEG"

Open to everyone: 100 requests per day with no key needed. Get a free key for 1,000 requests per day.
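
For scripted use, the same data can be fetched from Python. Below is a minimal sketch using the requests library; the structure of the returned JSON is not documented here, so no field names are assumed and the example simply prints the raw payload.

import requests

# Endpoint taken from the listing above (public, 100 requests/day without a key).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/JeremyCJM/DiffSHEG"

response = requests.get(API_URL, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors

report = response.json()
print(report)  # inspect the payload to see which quality fields are exposed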