woctezuma/stable-diffusion-safety-checker
Python package to apply the Safety Checker from Stable Diffusion.
This tool helps content moderators and platform managers automatically review images generated by Stable Diffusion. It takes a collection of images as input and identifies those that might contain "bad concepts" such as inappropriate or unsafe content. The output is a list of images flagged for review, helping maintain platform safety and compliance.
Use this if you need to automatically detect potentially unsafe or inappropriate content in image datasets, especially those generated by AI models like Stable Diffusion.
Not ideal if you need to detect highly nuanced or context-specific unsafe content, as it relies on predefined 'bad concepts'.
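Under the hood, Stable Diffusion's safety checker compares CLIP image embeddings against embeddings of the predefined "bad concepts", flagging an image when any cosine similarity exceeds that concept's threshold. A minimal NumPy sketch of that flagging logic (function names, embeddings, and thresholds here are illustrative, not this package's actual API):

```python
import numpy as np

def cosine_similarity(image_embeds: np.ndarray, concept_embeds: np.ndarray) -> np.ndarray:
    # Normalize rows, then compute pairwise cosine similarity
    # (shape: n_images x n_concepts).
    a = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)
    b = concept_embeds / np.linalg.norm(concept_embeds, axis=-1, keepdims=True)
    return a @ b.T

def flag_unsafe(image_embeds: np.ndarray,
                concept_embeds: np.ndarray,
                thresholds: np.ndarray) -> np.ndarray:
    # An image is flagged if its similarity to ANY bad concept
    # exceeds that concept's threshold.
    sims = cosine_similarity(image_embeds, concept_embeds)
    return (sims > thresholds).any(axis=-1)
```

In the real checker the embeddings come from a CLIP vision model and the concept embeddings and thresholds are baked into the pretrained `StableDiffusionSafetyChecker` weights; this sketch only shows the thresholded-similarity decision rule.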
Stars: 9
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Dec 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/woctezuma/stable-diffusion-safety-checker"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model