Prakhar-FF13/Toxic-Comments-Classification
Predict the toxicity rating of a comment made by a user.
This tool helps content moderators, community managers, and social media platforms automatically identify and flag toxic user comments. You provide a user-generated comment, and it outputs a numerical score indicating the comment's toxicity level, helping maintain a safer online environment.
No commits in the last 6 months.
Use this if you need to quickly assess and moderate a large volume of user comments for toxicity, ranging from mild to severe.
Not ideal if you need a detailed breakdown of specific toxicity subtypes (e.g., threat, insult, sexually explicit) or explanations for why a comment was flagged.
Stars
49
Forks
33
Language
HTML
License
—
Category
NLP
Last pushed
May 28, 2019
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Prakhar-FF13/Toxic-Comments-Classification"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge