sihyun-yu/PVDM
[CVPR'23] Video Probabilistic Diffusion Models in Projected Latent Space
PVDM implements the CVPR'23 paper above: it trains a video diffusion model in a low-dimensional projected latent space. Given a dataset of videos, the system learns their patterns and generates novel, realistic clips that match the style and content of the originals. It is aimed at researchers and engineers working on video synthesis, content generation, or animation.
324 stars. No commits in the last 6 months.
Use this if you need to generate high-quality, realistic video sequences by learning from a dataset of existing videos.
Not ideal if you're looking for text-to-video generation out-of-the-box or require a system that works on highly complex, unconstrained datasets without significant custom training.
Stars: 324
Forks: 16
Language: Python
License: MIT
Category:
Last pushed: May 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/PVDM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Lightweight image/video generation inference framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators