cvondrick/videogan
Generating Videos with Scene Dynamics. NIPS 2016.
Code for the NIPS 2016 paper above: a generative adversarial network trained on videos of a single scene type (like a beach or a golf course) that learns to synthesize new, short clips with plausible motion for that scene. The output is a small, low-resolution video 'hallucinated' by the model (the compositing idea is sketched below), useful both for creative projects and for studying generative video models.
713 stars. No commits in the last 6 months.
Use this if you need to generate short, artificial video clips that mimic the realistic motions of a specific environment or action.
Not ideal if you need to generate photorealistic, high-resolution, or long videos, or if you require precise control over the generated content.
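For intuition about how each clip is 'hallucinated', the paper's generator is two-stream: one stream produces a moving foreground plus a soft mask, the other a single static background frame, and the final video blends them per pixel. Below is a minimal sketch of just that compositing step in PyTorch (the repo itself is Lua/Torch); the tensor names and random inputs are illustrative only.

# Minimal sketch of the paper's two-stream compositing step, not the
# repo's Lua/Torch code. The generator emits a moving foreground, a
# static background, and a soft mask, and blends them per pixel.
# Shapes match the paper's output size: 32 frames at 64x64.
import torch

T, C, H, W = 32, 3, 64, 64             # frames, channels, height, width
foreground = torch.rand(T, C, H, W)    # foreground stream, one frame per step
mask = torch.rand(T, 1, H, W)          # soft mask in [0, 1], broadcast over channels
background = torch.rand(1, C, H, W)    # single static background frame

# video = m * f + (1 - m) * b, with the background repeated across time
video = mask * foreground + (1 - mask) * background.expand(T, C, H, W)
print(video.shape)  # torch.Size([32, 3, 64, 64])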
Stars: 713
Forks: 141
Language: Lua
License: —
Category: diffusion
Last pushed: May 03, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/cvondrick/videogan"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
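For scripted access, here is a minimal Python sketch that fetches the same endpoint as the curl command above. Only the URL comes from this entry; the response is assumed to be JSON with an unspecified schema, so the sketch just pretty-prints whatever comes back. How an API key is attached for the 1,000/day tier isn't documented here, so keyed access is omitted.

import json
import urllib.request

# Endpoint copied from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/cvondrick/videogan"

# Unkeyed access (100 requests/day per the note above).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON body; schema not documented here

print(json.dumps(data, indent=2))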
Higher-rated alternatives
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.