Sylver-Icy/nsfw-classifier

Modular NSFW Text Classification Pipeline

A fully modular text-classification system built with Python and scikit-learn, designed to scrape, clean, label, and classify NSFW/SFW text datasets, and to train and evaluate models using TF-IDF features and Logistic Regression. Built as a learning project with real-world structure and scalability in mind.
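The TF-IDF plus Logistic Regression approach described above can be sketched with a standard scikit-learn pipeline. This is a minimal illustration, not the repository's actual code: the example texts and the 0 = SFW / 1 = NSFW label convention are assumptions for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Tiny illustrative dataset (assumed labels: 0 = SFW, 1 = NSFW)
texts = [
    "family friendly recipe blog",
    "explicit adult content here",
    "weather forecast for tomorrow",
    "graphic mature material",
]
labels = [0, 1, 0, 1]

# Vectorize raw text into TF-IDF features, then fit a linear classifier
clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("logreg", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

# Classify a new, unseen snippet
print(clf.predict(["sunny weather forecast"]))
```

A `Pipeline` keeps vectorization and classification coupled, so the same TF-IDF vocabulary learned during training is applied at prediction time.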

Quality score: 17 / 100 (Experimental)

This tool automatically identifies and filters inappropriate user-generated text, such as comments or forum posts. It takes raw text from sources like Reddit, cleans it, and classifies it as either safe-for-work (SFW) or not-safe-for-work (NSFW). It is aimed at content moderators, community managers, and social-media analysts who need to maintain a clean online environment.

Use this if you need a flexible system to scrape, clean, and automatically categorize text content to flag potentially offensive or explicit language.

Not ideal if you need a highly accurate, production-ready content moderation system for high-stakes environments: the initial model is intended for learning and experimentation.

content-moderation community-management social-listening online-safety text-filtering
No license · No package · No dependents
Maintenance 6 / 25
Adoption 4 / 25
Maturity 7 / 25
Community 0 / 25


Stars: 8
Forks: —
Language: Python
License: —
Last pushed: Dec 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Sylver-Icy/nsfw-classifier"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.