harlanhong/ICCV2023-MCNET

The official code of our ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation.

Quality score: 33 / 100 (Emerging)

This project generates realistic talking head videos from a single source image and a driving video. You provide an image of the person to animate and a video demonstrating the desired head movements and expressions; the output is a new video of the source person moving and speaking as in the driving video. It is well suited to content creators, marketers, and educators producing dynamic visual content.

255 stars. No commits in the last 6 months.

Use this if you need to animate a static image of a person to speak or move based on a separate video's motion, without needing to film the person directly.

Not ideal if you need to create entirely new, unique human animations from scratch or if you require fine-grained control over individual facial feature movements beyond what a driving video provides.

Tags: video-production, digital-avatar-creation, content-generation, marketing-material, e-learning-content
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 255
Forks: 24
Language: Python
License: None
Last pushed: Oct 05, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/harlanhong/ICCV2023-MCNET"
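The same endpoint can be queried programmatically. A minimal Python sketch, assuming the endpoint returns a JSON payload (its field names are not documented here, so none are assumed):

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    # Build the quality-score endpoint URL for a given ecosystem and repo slug.
    return f"{API_BASE}/{ecosystem}/{quote(repo, safe='/')}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    # Fetch and decode the JSON response; raises on HTTP errors or rate limiting.
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

url = quality_url("diffusion", "harlanhong/ICCV2023-MCNET")
print(url)
```

With a free API key (1,000 requests/day), the request would carry the key as well; how the key is passed (header or query parameter) is not specified above, so check the API docs before relying on either.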

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.