harlanhong/ICCV2023-MCNET
The official code of our ICCV2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation
This project helps create realistic talking head videos from a single source image and a driving video. You provide an image of a person you want to animate and a video demonstrating the desired head movements and expressions. The output is a new video of the person from the source image moving and speaking as in the driving video. It's ideal for content creators, marketers, or educators looking to generate dynamic visual content.
255 stars. No commits in the last 6 months.
Use this if you need to animate a static image of a person to speak or move based on a separate video's motion, without needing to film the person directly.
Not ideal if you need to create entirely new, unique human animations from scratch or if you require fine-grained control over individual facial feature movements beyond what a driving video provides.
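The end-to-end usage pattern is simple: one still source image plus one driving video in, one rendered video out. The sketch below illustrates that flow only and is not the repository's actual API: load_mcnet and animate_frame are hypothetical placeholders for checkpoint loading and per-frame generation (see the repository README and its demo script for the real entry point), and the file paths are examples.

import imageio
import numpy as np

# Hypothetical helpers standing in for MCNet's checkpoint loading and per-frame
# generation step; they are NOT provided by the repository and exist only for illustration.
from mcnet_wrapper import load_mcnet, animate_frame  # assumed, for illustration only

source = imageio.imread("source.png")                    # still image of the person to animate
driving = imageio.mimread("driving.mp4", memtest=False)  # frames supplying head motion and expression
model = load_mcnet("checkpoint.pth")                     # pretrained weights (hypothetical loader)

frames = []
for drv in driving:
    # Each generated frame keeps the source identity while following the driving
    # frame's head pose and facial expression.
    frames.append(animate_frame(model, source, drv))

imageio.mimsave("result.mp4", [np.asarray(f) for f in frames], fps=25)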
Stars: 255
Forks: 24
Language: Python
License: —
Category: diffusion
Last pushed: Oct 05, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/harlanhong/ICCV2023-MCNET"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
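For programmatic access from Python, a minimal sketch using the requests library is shown below. The endpoint is the one given in the curl command above; treating the response body as JSON (and whatever fields it contains) is an assumption, since the schema is not documented on this page.

import requests

# Same quality-data endpoint as the curl command above (no key needed, 100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/harlanhong/ICCV2023-MCNET"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors

data = resp.json()       # assumed JSON; check the API docs for the actual fields
print(data)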
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators