huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
This library lets developers and researchers build or use AI models that generate new images, video, audio, or even molecular structures. You provide text prompts or existing data, and it produces novel visual, auditory, or structural content. It's aimed at machine learning practitioners and AI artists.
33,029 stars. Used by 55 other packages. Actively maintained with 85 commits in the last 30 days. Available on PyPI.
Use this if you are an AI developer or researcher who wants to implement, customize, or experiment with state-of-the-art diffusion models for content generation.
Not ideal if you are looking for a no-code tool or a simple application to generate media without needing to write code or understand machine learning concepts.
Stars: 33,029
Forks: 6,832
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 85
Dependencies: 9
Reverse dependents: 55
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/huggingface/diffusers"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
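The same endpoint shown in the curl command above can be consumed from Python with only the standard library. Only the URL comes from this page; the helper names and any JSON field names are assumptions.

```python
# Hedged sketch: fetching this page's quality data from the public endpoint.
# Only the base URL and path shape come from the listing above.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository, e.g. diffusion/huggingface/diffusers."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality JSON (100 requests/day without a key)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# fetch_quality("diffusion", "huggingface", "diffusers")
```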
Related models
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.
AUTOMATIC1111/stable-diffusion-webui
Stable Diffusion web UI