gmkim-ai/Diffusion-Video-Autoencoders

An official implementation of "Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding" (CVPR 2023) in PyTorch.

Score: 36 / 100 (Emerging)

This project helps video editors and content creators modify facial features in video footage while maintaining a consistent look across all frames. You input a video (as a sequence of image frames) and text descriptions or pre-defined attributes, and it outputs an edited video where the desired facial changes are smoothly applied without flickering. This is ideal for professionals creating or refining video content.

150 stars. No commits in the last 6 months.

Use this if you need to edit facial attributes in videos, such as adding a beard or changing hair color, and require the edits to appear natural and consistent frame-to-frame.

Not ideal if you're looking for a simple drag-and-drop video editor, as this tool requires preparing video frames and running command-line scripts.

video-editing post-production content-creation face-manipulation visual-effects
Badges: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 150
Forks: 8
Language: Python
License: MIT
Last pushed: Oct 18, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/gmkim-ai/Diffusion-Video-Autoencoders"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
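For scripted access, the endpoint above can be called from Python. A minimal sketch follows; the base URL and path segments are taken from the curl command shown here, but the response schema and the meaning of the first path segment (`diffusion`) are assumptions, not documented behavior.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(segment: str, owner: str, repo: str) -> str:
    """Build the quality-report URL from the path pieces seen in the
    example curl command (segment/owner/repo)."""
    return f"{BASE}/{quote(segment)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(segment: str, owner: str, repo: str) -> dict:
    """Fetch the report and parse it as JSON.

    Assumes the endpoint returns a JSON body; the field names are not
    documented here, so inspect the parsed dict before relying on keys.
    """
    with urlopen(quality_url(segment, owner, repo)) as resp:
        return json.load(resp)


# Reconstructs the exact URL from the curl example above.
url = quality_url("diffusion", "gmkim-ai", "Diffusion-Video-Autoencoders")
```

Unauthenticated calls are limited to 100 requests/day, so a script polling many repositories should either batch its queries or use a free API key for the 1,000/day tier.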