thefcraft/nsfw-prompt-detection-sd
NSFW Prompt Detection for Stable Diffusion
This tool helps content moderators and platform administrators working with Stable Diffusion to automatically identify and flag prompts that are not safe for work (NSFW). It takes a text prompt and its associated negative prompt as input and classifies the pair, indicating whether the prompt is likely to produce an NSFW image. This allows for proactive content moderation and helps maintain a safe online environment.
No commits in the last 6 months.
Use this if you need to automatically screen user-generated text prompts for potentially inappropriate or explicit content before they are used to generate images with Stable Diffusion.
Not ideal if you need to detect NSFW content directly within generated images, as this tool focuses solely on text prompts.
Stars
34
Forks
2
Language
Jupyter Notebook
License
—
Category
Last pushed
Mar 13, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/thefcraft/nsfw-prompt-detection-sd"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
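The same endpoint can also be called from a script. A minimal Python sketch using only the standard library, based on the curl example above; the shape of the JSON response is an assumption, so inspect the actual payload before relying on field names:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Without an API key this is limited to 100 requests/day,
    per the note above.
    """
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    url = build_url("ml-frameworks", "thefcraft", "nsfw-prompt-detection-sd")
    print(url)
```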
Higher-rated alternatives
infinitered/nsfwjs
NSFW detection on the client-side via TensorFlow.js
alex000kim/nsfw_data_scraper
Collection of scripts to aggregate image data for the purposes of training an NSFW Image Classifier
mdietrichstein/tensorflow-open_nsfw
TensorFlow implementation of Yahoo's Open NSFW model
GantMan/nsfw_model
Keras model of NSFW detector
infinitered/nsfwjs-mobile
NSFWjs in React Native