IBM/MAX-Toxic-Comment-Classifier
Detect 6 types of toxicity in user comments.
This tool helps content moderators, social media managers, and community administrators automatically identify and flag harmful user comments. You input text, and it reports whether the comment falls into any of six toxicity categories (toxic, severe toxic, obscene, threat, insult, identity hate), as sketched below. This helps automate content review workflows, making online spaces safer and more inclusive.
No commits in the last 6 months.
Use this if you need to automatically detect and categorize various types of offensive content in user-generated text, such as forum posts, comments, or reviews.
Not ideal if you require nuanced understanding of context or sarcasm beyond direct toxic language identification, or if you need to classify non-English text.
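The model is typically consumed as a REST endpoint. Below is a minimal sketch assuming the standard IBM MAX Docker image is running locally on port 5000 and exposing the usual MAX /model/predict route; the image name, payload shape, and response layout are assumptions to verify against the repo's README and Swagger UI.

```python
import requests

# Assumes the model is running locally as the standard MAX container, e.g.:
#   docker run -it -p 5000:5000 codait/max-toxic-comment-classifier
# The /model/predict route and the {"text": [...]} payload follow the usual
# MAX REST convention; confirm both against the container's Swagger UI.
PREDICT_URL = "http://localhost:5000/model/predict"

comments = [
    "Thanks for the thoughtful reply!",
    "You are an idiot and everyone hates you.",
]

resp = requests.post(PREDICT_URL, json={"text": comments})
resp.raise_for_status()

# Expected (assumed) shape: per-comment probabilities for the six labels
# toxic, severe_toxic, obscene, threat, insult, identity_hate.
print(resp.json())
```

A comment can trigger several labels at once, so moderation rules typically threshold each label's probability independently rather than picking a single category.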
Stars: 56
Forks: 31
Language: Python
License: Apache-2.0
Category: NLP
Last pushed: Sep 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/IBM/MAX-Toxic-Comment-Classifier"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
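If you prefer Python over curl, here is a minimal sketch using requests; the endpoint URL comes from the example above, but the response schema is not documented here, so the code just prints the raw payload rather than assuming field names.

```python
import requests

# Quality-data endpoint for this repo (same URL as the curl example above).
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/nlp/"
    "IBM/MAX-Toxic-Comment-Classifier"
)

resp = requests.get(API_URL, timeout=10)
resp.raise_for_status()

# Inspect the JSON payload to see the exact fields returned.
print(resp.json())
```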
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge