ivanvovk/controllable-face-generation
Controllable Face Generation via pretrained Conditional Adversarial Latent Autoencoder (ALAE)
This tool helps animators, content creators, and visual-effects artists transfer facial expressions and head movements from one person's video onto another person's static image. You provide a source video of someone talking or moving their head, plus a target image of another person's face. The output is an animated GIF of the target face mimicking the expressions and movements from the source video.
No commits in the last 6 months.
Use this if you need to quickly animate a still image of a face using the motions and expressions from a reference video.
Not ideal if you require perfect identity preservation and photo-realistic quality for the animated face, as some fine details may be lost.
Stars: 20
Forks: 4
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jun 09, 2020
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ivanvovk/controllable-face-generation"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
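The same endpoint can also be called from Python. This is a minimal sketch assuming only the URL shape shown in the curl example above; the response schema is not documented here, so the fetch simply prints whatever JSON comes back:

```python
"""Query the pt-edge quality API for a repository (sketch)."""
import json
import urllib.request

# Base path taken from the curl example; "diffusion" appears to be the
# category segment of that URL.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, repo: str) -> str:
    """Assemble the endpoint URL for a repo in a given category."""
    return f"{API_BASE}/{category}/{repo}"


if __name__ == "__main__":
    url = build_quality_url("diffusion", "ivanvovk/controllable-face-generation")
    # No API key supplied: subject to the 100 requests/day anonymous limit.
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    print(json.dumps(data, indent=2))
```

With a free key, the higher 1,000 requests/day limit would presumably apply; how the key is passed (header or query parameter) is not specified above.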
Higher-rated alternatives
Blade6570/icface
ICface: Interpretable and Controllable Face Reenactment Using GANs
HaiyuWu/Vec2Face
This is the official implementation of "Vec2Face: Scaling Face Dataset Generation with Loosely...
anisha2102/sketch2face
Conversion of sketches to photos using GANs.
bryandlee/naver-webtoon-faces
Generative models on NAVER Webtoon faces
bryandlee/malnyun_faces
A calm generative-model trainer (translated from Korean: 침착한 생성모델 학습기)