Vchitect/SEINE
[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
This project helps video creators and marketers turn static images into dynamic, short-to-long video clips. You provide a single image or a pair of images along with a text prompt describing the desired motion or transition. The output is a realistic video that animates the image or smoothly transitions between two scenes, ideal for social media content or visual storytelling.
969 stars. No commits in the last 6 months.
Use this if you need to quickly generate engaging video content from still images or create seamless visual transitions between different scenes without complex video editing software.
Not ideal if you need to edit existing video footage, perform precise frame-by-frame animation control, or generate extremely long-form cinematic productions.
- Stars: 969
- Forks: 65
- Language: Python
- License: Apache-2.0
- Category: diffusion
- Last pushed: Nov 13, 2024
- Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Vchitect/SEINE"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
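The same request can be made programmatically. A minimal sketch in Python, assuming the endpoint returns a JSON body (the `fetch_quality` helper name is illustrative, and the response fields are not specified here, so the sketch only builds the URL and decodes whatever JSON comes back):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the endpoint URL; the category segment matches the path
    shown in the curl example above ("diffusion" for this repo)."""
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    # Plain anonymous GET, rate-limited to 100 requests/day.
    # Assumption: the response body is JSON.
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

print(quality_url("diffusion", "Vchitect/SEINE"))
# https://pt-edge.onrender.com/api/v1/quality/diffusion/Vchitect/SEINE
```

Call `fetch_quality("diffusion", "Vchitect/SEINE")` to retrieve the stats shown on this page.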
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators