Hramchenko/diffusion_distiller
🚀 PyTorch implementation of "Progressive Distillation for Fast Sampling of Diffusion Models" (v-diffusion)
This project helps researchers and artists working with AI-powered image generation create high-quality images much faster. It takes an existing image-generating diffusion model and distills it: the resulting 'distilled' model produces comparable image quality in significantly fewer sampling steps and less time, making image generation more efficient for anyone using these models.
260 stars. No commits in the last 6 months.
Use this if you need to rapidly generate images with diffusion models and want to drastically cut the time and compute required per image, even at the cost of a slight degradation in quality.
Not ideal if absolute, pixel-perfect fidelity to the original model's output is your highest priority, or if you're not already working with diffusion models for image generation.
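The core idea behind progressive distillation can be sketched with a toy, closed-form example. This is a hedged illustration, not the repo's training code: the "teacher" denoiser and step sizes below are invented for clarity. One student step is defined to match the composition of two teacher steps, so the student needs half as many sampling steps (the real method trains a network to predict that composition, then repeats the halving).

```python
import numpy as np

def teacher_step(x, dt):
    # toy denoiser: shrinks x toward the clean sample at 0 by a fraction dt
    return x * (1.0 - dt)

def student_step(x, dt):
    # distillation target: one student step == two teacher half-steps
    # (progressive distillation trains a network to predict this composition)
    half = dt / 2.0
    return teacher_step(teacher_step(x, half), half)

x0 = np.array([1.0, -2.0, 0.5])

xt = x0.copy()
for _ in range(8):               # teacher: 8 small steps
    xt = teacher_step(xt, 1.0 / 8)

xs = x0.copy()
for _ in range(4):               # student: 4 steps, each twice as large
    xs = student_step(xs, 2.0 / 8)

print(np.allclose(xt, xs))      # same result in half the steps
```

Repeating this halving (8 → 4 → 2 → 1 steps) is what lets a distilled model sample in a handful of steps instead of hundreds.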
Stars
260
Forks
34
Language
Python
License
MIT
Category
Last pushed
May 31, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Hramchenko/diffusion_distiller"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
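The curl call above can also be made from Python. A minimal sketch using only the standard library; only the URL comes from this page, and the shape of the JSON response is an assumption:

```python
import json
import urllib.request

# Endpoint from this page; response fields are not documented here,
# so we just parse whatever JSON comes back.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "diffusion/Hramchenko/diffusion_distiller")

def fetch_quality(url=URL):
    # no API key needed for the free tier (100 requests/day)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# data = fetch_quality()  # uncomment to hit the live API
```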
Higher-rated alternatives
quantgirluk/aleatory
📦 Python library for Stochastic Processes Simulation and Visualisation
blei-lab/treeffuser
Treeffuser is an easy-to-use package for probabilistic prediction and probabilistic regression...
TuftsBCB/RegDiffusion
Diffusion model for gene regulatory network inference.
yuanchenyang/smalldiffusion
Simple and readable code for training and sampling from diffusion models
chairc/Integrated-Design-Diffusion-Model
IDDM (Industrial, landscape, animate, latent diffusion), support LDM, DDPM, DDIM, PLMS, webui...