shivangi-aneja/FaceTalk

[CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models

Quality score: 31 / 100 (Emerging)

This project helps animators and content creators generate realistic 3D talking head animations from audio. You provide an audio clip, and it synthesizes a detailed 3D motion sequence of a human head, complete with natural expressions and subtle movements such as hair and eye gestures. It targets professionals in animation, virtual reality, and digital content creation who need high-fidelity, audio-driven 3D character animation.

238 stars. No commits in the last 6 months.

Use this if you need to generate highly realistic and expressive 3D animations of talking human heads from audio inputs for your digital characters or virtual assistants.

Not ideal if you're looking for simple, stylized 2D lip-syncing or if your primary need is general body animation rather than detailed head and facial movements.

Tags: 3D-animation, character-rigging, virtual-reality, digital-humans, content-creation
Badges: Stale (6 months), No Package, No Dependents
Score breakdown (four components of 25 each, summing to the overall 31 / 100):
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 238
Forks: 3
Language: Shell
License: none listed
Last pushed: Mar 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/shivangi-aneja/FaceTalk"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
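As a sketch, the same endpoint can be called programmatically. Only the URL pattern shown in the curl command above is documented here; the `quality_url` helper, the assumption that `diffusion` is a fixed path segment, and the JSON response format are all assumptions, not documented API behavior:

```python
# Sketch: build and fetch the quality-score API URL for a GitHub repo.
# Assumes the path shape /api/v1/quality/diffusion/<owner>/<repo> from the
# curl example above; "diffusion" is assumed to be a fixed segment here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality-API URL for owner/repo (hypothetical helper)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the response; assumes the API returns JSON."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("shivangi-aneja", "FaceTalk"))
```

Keeping the URL construction separate from the network call makes the helper easy to test offline and to reuse for other repositories within the free 100-requests/day limit.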