czg1225/AsyncDiff
[NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
This project helps image and video creators, designers, and marketers generate high-quality images and videos much faster with popular diffusion models such as Stable Diffusion and AnimateDiff. By parallelizing the denoising process across multiple GPUs, it turns textual prompts or input images into visual content significantly more quickly. It is designed for anyone who regularly uses AI to create visual media and needs to speed up their workflow.
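A back-of-the-envelope sketch of why asynchronous denoising helps (illustrative only; the function names below are not AsyncDiff's API): the denoiser is split into n components placed on n GPUs. Run sequentially, every denoising step costs n component-executions in a row; run asynchronously, each component consumes slightly stale upstream activations, so after a short warm-up all n GPUs do useful work every round.

```python
def sequential_rounds(n_components: int, n_steps: int) -> int:
    # One component executes at a time, so each denoising step
    # costs n_components rounds of compute.
    return n_steps * n_components

def async_rounds(n_components: int, n_steps: int) -> int:
    # All components execute concurrently each round; the pipeline
    # needs n_components - 1 warm-up rounds before it is full.
    return n_steps + n_components - 1

# e.g. 50 denoising steps split across 4 GPUs:
# sequential_rounds(4, 50) -> 200, async_rounds(4, 50) -> 53
```

With 4 GPUs and 50 steps this toy model predicts roughly a 3.8x wall-clock reduction; the paper's measured speedups are lower because of communication overhead and the approximation error introduced by stale activations.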
212 stars. No commits in the last 6 months.
Use this if you are generating images or videos with diffusion models and want to drastically reduce the time it takes to get your results, especially when using multiple GPUs.
Not ideal if you are only running diffusion models on a single GPU or if generation speed is not a critical concern for your workflow.
Stars: 212
Forks: 13
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/czg1225/AsyncDiff"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
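If you would rather call the endpoint from Python than curl, here is a minimal stdlib sketch. The response schema is not documented here, so the JSON payload is treated as opaque:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repo endpoint shown in the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # GET the record and parse it as JSON; raises on HTTP errors.
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# data = fetch_quality("diffusion", "czg1225", "AsyncDiff")
```

Unauthenticated calls count against the 100 requests/day limit; with a free key, include it however the API documentation specifies (the auth mechanism is not shown on this page).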
Higher-rated alternatives
FlorianFuerrutter/genQC
Generative Quantum Circuits
horseee/DeepCache
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
Gen-Verse/MMaDA
MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion,...
kuleshov-group/mdlm
[NeurIPS 2024] Simple and Effective Masked Diffusion Language Model
Shark-NLP/DiffuSeq
[ICLR'23] DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models