IBM/MAX-Toxic-Comment-Classifier

Detect 6 types of toxicity in user comments.

Quality score: 46/100 (Emerging)

This tool helps content moderators, social media managers, and community administrators automatically identify and flag harmful user comments. You input text, and it reports whether the comment contains any of six types of toxicity, such as insults, threats, or hate speech. This automates content review workflows, making online spaces safer and more inclusive.
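As a rough sketch of how flagging might work downstream: IBM MAX models are typically served as a microservice whose prediction endpoint returns a per-label score for each input comment. The label names below follow the Jigsaw toxicity taxonomy this model family is associated with, and the response shape and threshold are assumptions for illustration, not the documented API contract.

```python
# Hypothetical per-label scores as a dict, e.g. parsed from a
# /model/predict JSON response (shape assumed for illustration).
LABELS = ("toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate")

def flag_comment(scores: dict, threshold: float = 0.5) -> list:
    """Return the toxicity labels whose score exceeds the threshold."""
    return [label for label in LABELS if scores.get(label, 0.0) > threshold]

# Example with made-up scores for one comment:
sample = {"toxic": 0.92, "insult": 0.81, "threat": 0.03}
print(flag_comment(sample))  # ['toxic', 'insult']
```

A moderation pipeline would then route comments with any flagged label to human review rather than blocking them outright, since scores near the threshold are often ambiguous.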

No commits in the last 6 months.

Use this if you need to automatically detect and categorize various types of offensive content in user-generated text, such as forum posts, comments, or reviews.

Not ideal if you require nuanced understanding of context or sarcasm beyond direct toxic language identification, or if you need to classify non-English text.

content-moderation online-community-management brand-reputation social-listening customer-feedback-analysis
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 20 / 25

How are scores calculated?

Stars: 56
Forks: 31
Language: Python
License: Apache-2.0
Last pushed: Sep 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/IBM/MAX-Toxic-Comment-Classifier"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.