PavelOstyakov/toxic

Toxic Comment Classification Challenge

Quality score: 49 / 100 (Emerging)

This tool helps content moderators and online community managers automatically identify and categorize toxic comments. You input raw comment data, and it outputs predictions for different toxicity types. It's designed for anyone needing to efficiently flag harmful language in user-generated content.

266 stars. No commits in the last 6 months.

Use this if you need an automated way to screen large volumes of user comments for toxicity, hate speech, obscenity, or threats.

Not ideal if you need a real-time moderation system for live chats or require extremely nuanced, human-level contextual understanding for borderline cases.
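To illustrate the expected input/output shape described above (raw comment in, per-category predictions out), here is a minimal sketch over the six labels from the Kaggle Toxic Comment Classification Challenge. This keyword lookup is a hypothetical stand-in for demonstration only, not the repository's actual model:

```python
# Illustrative sketch: shows the input/output shape of a per-label
# toxicity classifier. The keyword lists below are invented examples.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical trigger words per sublabel, for demonstration only.
KEYWORDS = {
    "threat": {"kill", "hurt"},
    "insult": {"idiot", "stupid"},
}

def predict(comment: str) -> dict:
    """Return a 0/1 prediction for each toxicity label on one raw comment."""
    tokens = set(comment.lower().split())
    scores = {label: 0 for label in LABELS}
    for label, words in KEYWORDS.items():
        if tokens & words:
            scores[label] = 1
            scores["toxic"] = 1  # a flagged sublabel implies overall toxicity
    return scores

print(predict("you are an idiot"))
# → {'toxic': 1, ..., 'insult': 1, ...}
```

A real model replaces the keyword lookup with a trained classifier, but the contract stays the same: one comment string in, one score per label out.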

content-moderation online-community-management brand-safety user-generated-content social-media-management
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 266
Forks: 73
Language: Python
License: MIT
Last pushed: Jan 22, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/PavelOstyakov/toxic"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
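The same endpoint can be called from Python with the standard library. The URL shape is taken from the curl command above; the structure of the JSON response is not documented here, so the fetch simply decodes and prints whatever comes back:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Assemble the quality-score endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (field names unverified)."""
    with urllib.request.urlopen(build_quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Anonymous access: 100 requests/day per the note above.
    print(fetch_quality("nlp", "PavelOstyakov", "toxic"))
```

With an API key, you would presumably pass it as a header or query parameter; check the service's own documentation for the exact mechanism.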