Vchitect/SEINE

[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction

Score: 42 / 100 (Emerging)

This project helps video creators and marketers turn static images into dynamic, short-to-long video clips. You provide a single image or a pair of images along with a text prompt describing the desired motion or transition. The output is a realistic video that animates the image or smoothly transitions between two scenes, ideal for social media content or visual storytelling.

969 stars. No commits in the last 6 months.

Use this if you need to quickly generate engaging video content from still images or create seamless visual transitions between different scenes without complex video editing software.

Not ideal if you need to edit existing video footage, perform precise frame-by-frame animation control, or generate extremely long-form cinematic productions.

video-production social-media-content marketing-assets visual-storytelling digital-art
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 969
Forks: 65
Language: Python
License: Apache-2.0
Last pushed: Nov 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Vchitect/SEINE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
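The same endpoint can be called from any HTTP client. Below is a minimal Python sketch using only the standard library; the response is assumed to be JSON, and its field names are not documented here, so the result is returned as a plain dict:

```python
import json
import urllib.request

# URL template for the quality endpoint shown above.
API = "https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{owner}/{repo}"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score URL for a repository."""
    return API.format(ecosystem=ecosystem, owner=owner, repo=repo)


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch the quality data (no key needed: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


# Example call for this repository (network access required):
# data = fetch_quality("diffusion", "Vchitect", "SEINE")
```

The fetch is left commented out so the snippet runs without network access; uncomment it to retrieve the live data for `Vchitect/SEINE`.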