Hramchenko/diffusion_distiller

🚀 PyTorch implementation of "Progressive Distillation for Fast Sampling of Diffusion Models" (v-diffusion)

Score: 44 / 100 (Emerging)

This project helps researchers and artists working with AI-powered image generation create high-quality images much faster. It takes an existing diffusion model and progressively distills it into a new "student" model that produces comparable image quality in significantly fewer sampling steps, making generation cheaper and quicker for anyone using these models.
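The core idea can be sketched in a few lines. This is a toy illustration, not the repository's actual code (which distills v-diffusion U-Nets): a student sampler whose single step reproduces two teacher steps, halving the step count while following the same trajectory. The linear "denoiser" stands in for a deterministic DDIM-style update.

```python
# Toy sketch of progressive distillation (hypothetical, for illustration):
# one student step matches two teacher steps exactly, so the student
# needs half as many steps to reach the same result.

def make_teacher_step(a=0.9, b=0.05):
    # Stand-in for one deterministic denoising update.
    return lambda x: a * x + b

def distill(step):
    # The student step is trained to match TWO teacher steps at once.
    # For this linear toy, the composition is exact.
    return lambda x: step(step(x))

def sample(x, step, n_steps):
    # Run the sampler for a fixed number of steps.
    for _ in range(n_steps):
        x = step(x)
    return x

teacher = make_teacher_step()
student = distill(teacher)

full = sample(1.0, teacher, 8)  # 8 teacher steps
fast = sample(1.0, student, 4)  # 4 student steps, same endpoint
```

In the real method this halving is applied repeatedly, so a model needing thousands of steps can be distilled down to a handful.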

260 stars. No commits in the last 6 months.

Use this if you need to generate images rapidly with diffusion models and want to drastically reduce the time and compute required per image, even at the cost of a slight loss in quality.

Not ideal if absolute, pixel-perfect fidelity to the original model's output is your highest priority, or if you're not already working with diffusion models for image generation.

Tags: AI-art, image-generation, computational-efficiency, creative-AI, diffusion-models
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 260
Forks: 34
Language: Python
License: MIT
Last pushed: May 31, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Hramchenko/diffusion_distiller"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
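The same request can be made from Python with only the standard library. This is a minimal sketch of the curl command above; the response schema is not documented here, so the parsed JSON is printed as-is.

```python
# Hypothetical helper around the endpoint shown in the curl example.
import json
import urllib.request

def quality_url(ecosystem, owner, repo):
    # Builds the endpoint path used by the curl command above.
    return f"https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem, owner, repo):
    # Performs the GET request and decodes the JSON body.
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("diffusion", "Hramchenko", "diffusion_distiller")
    print(json.dumps(data, indent=2))
```

Remember the rate limit: 100 requests/day without a key, 1,000/day with a free key.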