wenhao728/VORTA
Code implementation of the paper "VORTA: Efficient Video Diffusion via Routing Sparse Attention"
VORTA accelerates text-to-video generation by routing sparse attention through an existing video diffusion model, producing videos substantially faster than full-attention sampling. It is aimed at AI researchers and practitioners working with advanced video diffusion models who need to generate results more quickly and efficiently.
Use this if you are generating videos using diffusion models like HunyuanVideo or Wan 2.1 and want to significantly reduce the time and computational resources required.
Not ideal if you are a casual user looking for a simple, out-of-the-box video creation tool without needing to engage with underlying model architectures.
Stars
11
Forks
—
Language
Python
License
MIT
Category
Last pushed
Oct 15, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wenhao728/VORTA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators