czg1225/AsyncDiff

[NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising

Score: 39/100 (Emerging)

This project helps image and video creators, designers, and marketers generate high-quality images and videos much faster with popular diffusion models like Stable Diffusion and AnimateDiff. By parallelizing the denoising process across multiple GPUs, it turns textual prompts or input images into visual content in significantly less time. It is designed for anyone who regularly uses AI to create visual media and needs to speed up their workflow.

212 stars. No commits in the last 6 months.

Use this if you are generating images or videos with diffusion models and want to drastically reduce the time it takes to get your results, especially when using multiple GPUs.

Not ideal if you only run diffusion models on a single GPU, or if generation speed is not a critical concern for your workflow.

AI-art-generation video-production digital-content-creation generative-design creative-workflow-optimization
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 212
Forks: 13
Language: Python
License: Apache-2.0
Last pushed: Sep 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/czg1225/AsyncDiff"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
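The same endpoint can also be called from Python. A minimal sketch, assuming only the URL shape shown in the curl command above; the response schema is not documented here, so any parsing of the returned JSON would be an assumption:

```python
from urllib.parse import quote

# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL, escaping each path segment."""
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

url = quality_url("diffusion", "czg1225", "AsyncDiff")
print(url)
# https://pt-edge.onrender.com/api/v1/quality/diffusion/czg1225/AsyncDiff
```

From there, a standard `urllib.request.urlopen(url)` call (subject to the rate limits above) would return the same JSON payload the curl command fetches.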