shgaurav1/DVG
Diverse Video Generation using a Gaussian Process Trigger
This project helps researchers and developers generate diverse video sequences from existing datasets, such as the KTH action recognition dataset, producing new, varied video frames. It is useful for expanding limited training data or stress-testing the robustness of video analysis systems.
No commits in the last 6 months.
Use this if you need to generate multiple, varied video sequences from a limited set of original videos, especially for training or evaluating AI models.
Not ideal if you're looking for a simple, out-of-the-box solution for creating high-resolution, photorealistic videos for general creative or production purposes.
Stars
18
Forks
9
Language
Python
License
—
Category
Last pushed
Dec 13, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/shgaurav1/DVG"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
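The same endpoint can be queried from Python using only the standard library. A minimal sketch; the `X-API-Key` header name and the JSON response schema are assumptions, so inspect an actual response before depending on field names:

```python
import json
import urllib.request
from typing import Optional

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch the quality record as a dict.

    Pass an API key for the higher 1,000/day rate limit.
    The 'X-API-Key' header name is an assumption, not documented here.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# No network call at import time; just show the URL being requested.
print(quality_url("shgaurav1", "DVG"))
```

Calling `fetch_quality("shgaurav1", "DVG")` would then return the same JSON payload as the curl command above.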
Higher-rated alternatives
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.