MKFMIKU/vidm
[AAAI23 Oral] Official implementations of Video Implicit Diffusion Models
This project provides the official implementation of Video Implicit Diffusion Models (VIDM), a diffusion-based approach that synthesizes realistic, high-quality videos from scratch rather than editing existing footage. Its primary users are researchers and practitioners exploring advanced video generation techniques.
No commits in the last 6 months.
Use this if you need to generate diverse and high-quality synthetic videos, especially for research in video generation models or visual content creation.
Not ideal if you're looking for a simple, out-of-the-box tool for editing existing videos or if you don't have experience with deep learning frameworks.
Stars: 68
Forks: 4
Language: Python
License: MIT
Category: Diffusion
Last pushed: Nov 01, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/MKFMIKU/vidm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
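If you prefer Python over curl, here is a minimal sketch for fetching the same data. It assumes the endpoint returns a JSON object and uses the third-party requests library; the response fields are not documented here, so the loop simply prints whatever keys come back.

import requests

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/MKFMIKU/vidm"

# Assumption: the endpoint returns JSON; adjust parsing if the payload differs.
response = requests.get(URL, timeout=10)
response.raise_for_status()
data = response.json()

# Field names are not documented here, so print every key/value pair returned.
for key, value in data.items():
    print(f"{key}: {value}")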
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
A lightweight image/video generation inference framework.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators