hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
This tool helps video creators and content producers generate high-quality videos from text descriptions or images. You supply a prompt (text describing the desired video) or an image, and it outputs a new video, cutting both generation time and compute cost. It is designed for professionals who need to produce video content rapidly without deep expertise in AI models.
3,232 stars. Used by 1 other package. Actively maintained with 42 commits in the last 30 days. Available on PyPI.
Use this if you need to generate short, high-resolution videos from text or images quickly and efficiently, leveraging state-of-the-art AI models.
Not ideal if you require extremely long video outputs or highly complex scene interactions that demand detailed, frame-by-frame control.
Stars
3,232
Forks
286
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 17, 2026
Commits (30d)
42
Dependencies
44
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/hao-ai-lab/FastVideo"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators
Fantasy-AMAP/fantasy-talking
[ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis