ivanvovk/controllable-face-generation

Controllable Face Generation via pretrained Conditional Adversarial Latent Autoencoder (ALAE)

Score: 37 / 100 (Emerging)

This tool helps animators, content creators, and visual effects artists transfer facial expressions and head movements from one person's video onto another person's static image. You provide a source video of someone talking or moving their head, and a target image of another person's face. The output is an animated GIF of the target person's face mimicking the expressions and movements from the source video.

No commits in the last 6 months.

Use this if you need to quickly animate a still image of a face using the motions and expressions from a reference video.

Not ideal if you require perfect identity preservation and photo-realistic quality for the animated face, as some fine details may be lost.

Tags: facial animation, video synthesis, digital avatars, content creation, visual effects
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25
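The headline score appears to be the sum of the four 25-point subscores. A minimal sanity check, assuming simple addition (the scoring formula is not documented on this page):

```python
# Subscores as shown on this page (each out of 25).
# Assumption: the overall score is their plain sum.
subscores = {"Maintenance": 0, "Adoption": 6, "Maturity": 16, "Community": 15}

total = sum(subscores.values())  # out of 100
print(total)  # → 37, matching the headline score
```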


Stars: 20
Forks: 4
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 09, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ivanvovk/controllable-face-generation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
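The same endpoint can be queried programmatically. A minimal sketch in Python that builds the URL for any owner/repo pair (the response schema is not documented here, so the fetch itself is only noted in a comment):

```python
from urllib.parse import quote

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("ivanvovk", "controllable-face-generation")
# Fetching this URL (e.g. with urllib.request.urlopen) returns the score
# data; mind the rate limit of 100 requests/day without an API key.
print(url)
```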