castorini/daam

Diffusion attentive attribution maps for interpreting Stable Diffusion.

Score: 52 / 100 — Established

This tool helps AI artists and researchers understand why an AI-generated image looks the way it does. You provide a text prompt, and it outputs the generated image along with 'heat maps' that highlight which parts of the image correspond to specific words in your prompt. This is for anyone who uses Stable Diffusion to create images and wants to interpret the AI's creative process.

788 stars. No commits in the last 6 months. Available on PyPI.

Use this if you generate images with Stable Diffusion and want to see how each word in your prompt influenced the final visual output.

Not ideal if you only want to generate images and have no need to interpret the model's decision-making.

Tags: AI Art, Creative Design, Image Generation, Machine Learning Interpretability, Text-to-Image
Badges: Stale (6 months), No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 788
Forks: 69
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 05, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/castorini/daam"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
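The same endpoint can be called from Python. A minimal sketch using only the standard library; the response's JSON field names are not documented on this card, so the example stops at fetching the raw record rather than assuming a schema.

```python
import json
import urllib.request

# Endpoint pattern taken from the curl example above.
API = "https://pt-edge.onrender.com/api/v1/quality/diffusion/{owner}/{repo}"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-record URL for a given GitHub repository."""
    return API.format(owner=owner, repo=repo)


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (no API key required)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("castorini", "daam"))
# https://pt-edge.onrender.com/api/v1/quality/diffusion/castorini/daam
```

Without a key, requests are limited to 100 per day, so cache the response if you poll more than a handful of repositories.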