PavelOstyakov/toxic
Toxic Comment Classification Challenge
This tool helps content moderators and online community managers automatically identify and categorize toxic comments. You input raw comment data, and it outputs predictions for the six toxicity labels from the Jigsaw challenge: toxic, severe toxic, obscene, threat, insult, and identity hate. It's designed for anyone who needs to flag harmful language in user-generated content at scale.
266 stars. No commits in the last 6 months.
Use this if you need an automated way to screen large volumes of user comments for toxicity, hate speech, obscenity, or threats.
Not ideal if you need a real-time moderation system for live chats or require extremely nuanced, human-level contextual understanding for borderline cases.
Stars
266
Forks
73
Language
Python
License
MIT
Category
NLP
Last pushed
Jan 22, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/PavelOstyakov/toxic"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
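For scripted access, the endpoint above can be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the JSON response shape and the `X-API-Key` header name for keyed access are assumptions — only the URL and the 100/1,000 requests-per-day limits come from the listing above.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner, repo):
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the quality record as a dict.

    Works without a key (100 requests/day); pass a free key
    for 1,000/day. The header name is a guess.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: the same request the curl line above makes.
# data = fetch_quality("PavelOstyakov", "toxic")
```

The URL builder mirrors the curl example exactly; the fetch is left commented out so the snippet does not fire a network request on import.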
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge
IBM/MAX-Toxic-Comment-Classifier
Detect 6 types of toxicity in user comments.