YBYBZhang/ControlVideo
[ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation"
This tool helps animators and content creators quickly generate videos from existing clips and text descriptions. You provide a video, a text prompt describing the desired output, and specify how the tool should interpret the input video's structure (like its depth, edges, or human poses). It then produces a new video that follows the input's motion and structure while matching your text prompt's style and content.
861 stars. No commits in the last 6 months.
Use this if you need to create custom animated content or visual effects by transforming an existing video's motion and structure into a new scene described by text.
Not ideal if you need to generate video entirely from scratch without an existing structural reference, or if you require precise control over every frame's artistic details.
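The workflow described above can be sketched as a single command-line call. Note that the script name and every flag below are illustrative assumptions modeled on that description, not verified against this repository's actual CLI; consult the repo's README for the real interface.

```shell
# Hypothetical invocation (assumed script and flag names, not confirmed
# against YBYBZhang/ControlVideo's CLI -- check the repository README):
python inference.py \
    --prompt "A majestic eagle soaring over snowy mountains" \
    --condition depth \
    --video_path input/source.mp4 \
    --output_path outputs/ \
    --width 512 --height 512
# The condition flag would select how the input video's structure is
# interpreted: depth maps, edges, or human poses -- the three signal
# types mentioned in the description above.
```

The key design point is that the source clip supplies motion and structure while the text prompt supplies style and content, so swapping the prompt regenerates the same motion in a new scene.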
Stars: 861
Forks: 63
Language: Python
License: MIT
Category: diffusion
Last pushed: Oct 12, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/YBYBZhang/ControlVideo"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
A lightweight image and video generation inference framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators