castorini/daam
Diffusion attentive attribution maps for interpreting Stable Diffusion.
This tool helps AI artists and researchers understand why a generated image looks the way it does: given a text prompt, it outputs the generated image along with heat maps that highlight which parts of the image correspond to specific words in the prompt. It is aimed at anyone who uses Stable Diffusion to create images and wants to interpret the model's behavior.
788 stars. No commits in the last 6 months. Available on PyPI.
Use this if you generate images with Stable Diffusion and want to see how each word in your prompt influenced the final visual output.
Not ideal if you just want to generate images and have no need to inspect the model's decision-making.
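To make the idea concrete, here is a minimal, self-contained sketch of the core technique DAAM is built on: aggregating the UNet's cross-attention maps (attention from image patches to prompt tokens) across heads and layers, upsampling each layer's map to a common resolution, and averaging to get a per-word heat map. The `word_heat_map` function and the input shapes are hypothetical stand-ins for illustration, not DAAM's actual API; DAAM itself also sums over diffusion timesteps and uses smoother upsampling.

```python
import numpy as np

def word_heat_map(attn_maps, token_idx, out_size=64):
    """Aggregate cross-attention maps into one heat map for a single token.

    attn_maps: list of arrays, each of shape (heads, h*w, n_tokens), at
    possibly different spatial resolutions -- hypothetical stand-ins for
    the attention tensors hooked out of the diffusion UNet.
    token_idx: index of the prompt token to attribute.
    out_size:  side length of the common output resolution.
    """
    total = np.zeros((out_size, out_size))
    for a in attn_maps:
        heads, hw, _ = a.shape
        side = int(round(hw ** 0.5))
        # Average over attention heads, then take this token's column
        # and reshape it back into a spatial map.
        m = a.mean(axis=0)[:, token_idx].reshape(side, side)
        # Nearest-neighbour upsample to the shared output resolution
        # (DAAM uses a smoother interpolation; this keeps the sketch simple).
        scale = out_size // side
        m = np.repeat(np.repeat(m, scale, axis=0), scale, axis=1)
        total += m
    # Average across layers so the scale is independent of layer count.
    return total / len(attn_maps)
```

High values in the returned map mark image regions that attended strongly to the chosen word, which is what the overlaid heat maps visualize.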
Stars
788
Forks
69
Language
Jupyter Notebook
License
MIT
Category
Diffusion
Last pushed
Apr 05, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/castorini/daam"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related models
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.