Vicky0522/I2VEdit
[SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models
This tool helps video editors or content creators apply detailed edits from a single image frame across an entire video. You provide an original video and an edited version of its very first frame, and it produces a new video where the edits from that first frame are consistently applied throughout. This is ideal for quickly transforming video content based on precise image edits.
No commits in the last 6 months.
Use this if you need to propagate specific stylistic or content changes made to a single video frame across an entire video sequence while maintaining motion.
Not ideal if you need to make different edits at different points in the video, since it works by propagating a single first-frame edit.
Stars: 83
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Jun 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Vicky0522/I2VEdit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
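The curl command above can be wrapped in a small helper if you query several repositories. A minimal sketch, assuming the endpoint path simply appends `owner/repo` to the base URL shown above (the response schema is not documented here, so only URL construction is illustrated):

```python
from urllib.parse import quote

# Base endpoint taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for an owner/repo pair (assumed pattern)."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("Vicky0522", "I2VEdit"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/Vicky0522/I2VEdit
```

You could then fetch the URL with any HTTP client; without an API key you are limited to the 100 requests/day noted above.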
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators