shivangi-aneja/FaceTalk
[CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
This project helps animators and content creators generate realistic 3D talking head animations from audio. Given an audio clip, it synthesizes a detailed 3D motion sequence of a human head, complete with natural expressions and subtle details such as hair and eye movements. It targets professionals in animation, virtual reality, and digital content creation who need high-fidelity, audio-driven 3D character animation.
238 stars. No commits in the last 6 months.
Use this if you need to generate highly realistic and expressive 3D animations of talking human heads from audio inputs for your digital characters or virtual assistants.
Not ideal if you're looking for simple, stylized 2D lip-syncing or if your primary need is general body animation rather than detailed head and facial movements.
Stars
238
Forks
3
Language
Shell
License
—
Category
Diffusion
Last pushed
Mar 17, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/shivangi-aneja/FaceTalk"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
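From the curl example above, the endpoint pattern appears to be `/api/v1/quality/<category>/<owner>/<repo>`. A minimal Python sketch for building the URL and fetching the data; only the URL pattern and rate limits come from this page, while the response being JSON is an assumption:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch repository quality data (100 requests/day without a key).

    Assumes the endpoint returns JSON; the exact response schema
    is not documented on this page.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("diffusion", "shivangi-aneja", "FaceTalk"))
```

Swap in your own key once you have one; how the key is passed (header vs. query parameter) is not specified here, so check the API's documentation.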
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators