G-U-N/Gen-L-Video
The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
This project helps video creators, marketers, or educators generate long, multi-segment videos from various text descriptions. Instead of being limited to very short clips with a single theme, you can input multiple text prompts to guide the creation of extended videos with different scenes or narratives. This is ideal for anyone needing to produce longer video content that evolves semantically over time.
Use this if you need to generate videos that are hundreds of frames long, with different textual descriptions dictating various segments, and want to achieve this without extensive new model training.
Not ideal if you only need to generate very short, single-theme video clips or if you are not comfortable working with command-line tools and code for video generation.
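The paper's core idea, temporal co-denoising, can be sketched roughly as: denoise overlapping short clips independently (each clip can follow its own text prompt) and average the results on overlapping frames, so a short-video model extends to long videos without retraining. The sketch below is an illustrative simplification, not the project's actual code; `denoise_window` is a hypothetical stand-in for a short-clip denoiser, and per-frame latents are plain floats for clarity.

```python
def co_denoise_step(frames, denoise_window, window=16, stride=8):
    """One temporal co-denoising step (illustrative sketch).

    frames: list of per-frame latents (plain floats here for clarity).
    denoise_window: hypothetical callable that denoises one short clip.
    Overlapping windows are denoised independently; frames covered by
    several windows are averaged, which is the co-denoising idea.
    """
    acc = [0.0] * len(frames)
    cnt = [0] * len(frames)
    for start in range(0, max(len(frames) - window, 0) + 1, stride):
        denoised = denoise_window(frames[start:start + window])
        for i, v in enumerate(denoised):
            acc[start + i] += v
            cnt[start + i] += 1
    # Average overlapping predictions; guard frames no window covered.
    return [a / max(c, 1) for a, c in zip(acc, cnt)]
```

With an identity denoiser the merge is a no-op, which shows the averaging is consistent; in practice each window would be denoised under a different text prompt.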
Stars
307
Forks
34
Language
Jupyter Notebook
License
Apache-2.0
Category
diffusion
Last pushed
Oct 19, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/G-U-N/Gen-L-Video"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
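The same endpoint can be called from Python with the standard library. This is a minimal sketch assuming only what the curl example above shows: the path layout `quality/<category>/<owner>/<repo>` is inferred from that single example, and the JSON response schema is not documented here.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Mirrors the curl endpoint above; the path layout is an
    # assumption inferred from that one example.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    # No key needed for up to 100 requests/day (per the note above).
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)  # response schema is not documented here
```

For example, `fetch_quality("diffusion", "G-U-N", "Gen-L-Video")` requests the URL shown in the curl command.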
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators