unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.
This project helps content moderators, community managers, and platform administrators automatically identify harmful content in online discussions. You pass in a comment or a list of comments, and for each one it predicts whether the text is toxic, severely toxic, obscene, a threat, an insult, an identity attack, or sexually explicit. It's designed for anyone managing online communities or user-generated content.
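The per-label output described above can be turned into a simple flag list by thresholding. This is a minimal sketch: the real prediction call (shown in the comments) comes from the detoxify PyPI package, while the sample score dict and the `flag_labels` helper below are ours, for illustration only.

```python
# Illustrative post-processing of Detoxify-style output. The documented call is:
#   from detoxify import Detoxify
#   scores = Detoxify("original").predict("your comment here")
# which returns a dict mapping each label to a probability.

def flag_labels(scores, threshold=0.5):
    """Return the labels whose predicted probability exceeds the threshold."""
    return [label for label, p in scores.items() if p > threshold]

# Hypothetical scores for a borderline comment (not real model output):
sample = {
    "toxicity": 0.91,
    "severe_toxicity": 0.04,
    "obscene": 0.62,
    "threat": 0.01,
    "insult": 0.78,
    "identity_attack": 0.02,
}

print(flag_labels(sample))  # → ['toxicity', 'obscene', 'insult']
```

Raising the threshold trades recall for precision, which is one way to tune around the false positives on sarcastic or self-deprecating comments noted below.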
1,202 stars. Used by 4 other packages. Actively maintained with 2 commits in the last 30 days. Available on PyPI.
Use this if you need to quickly flag potentially harmful comments across multiple languages to maintain a safe and respectful online environment.
Not ideal if you need to perfectly capture nuanced humor or self-deprecating comments, as it may misclassify them as toxic.
Stars: 1,202
Forks: 141
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 05, 2026
Commits (30d): 2
Dependencies: 3
Reverse dependents: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/unitaryai/detoxify"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
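The same endpoint can be called from Python with the standard library. This is a sketch assuming only the URL pattern visible in the curl command above (`/api/v1/quality/<category>/<owner>/<repo>`); the helper names are ours, and the `Authorization: Bearer` header for keyed access is an assumption, so check the API docs before relying on it.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch the quality record as a dict.

    Passing api_key sends it as a Bearer token; that header name is an
    assumption, not confirmed by the listing above.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL used by the curl example above; no network call is made.
    print(quality_url("nlp", "unitaryai", "detoxify"))
```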
Related tools
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge
IBM/MAX-Toxic-Comment-Classifier
Detect 6 types of toxicity in user comments.