imrahulr/Toxic-Comment-Classification-Kaggle
Deep Learning for Toxic Comment Classification
This project helps online platforms automatically identify and categorize harmful content. You feed it user-submitted text comments, and it predicts whether each comment is toxic, severely toxic, obscene, threatening, insulting, or contains identity hate (the six labels from the Kaggle Jigsaw Toxic Comment Classification Challenge). It's designed for content moderators, community managers, and platform administrators who need to maintain a safe and respectful online environment.
No commits in the last 6 months.
Use this if you manage user-generated content on a forum, social media platform, or comment section and need an automated way to flag or filter out problematic language.
Not ideal if you need to detect highly nuanced forms of hate speech or sarcasm that require deep contextual understanding beyond direct word analysis.
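To make the task concrete, here is a minimal sketch of multi-label toxic-comment classification over the six challenge labels. This is not the repo's method (the notebooks there use deep learning); it is a toy TF-IDF plus one-vs-rest logistic-regression baseline, and the four comments and their label matrix are made-up illustrations, not real Jigsaw data.

```python
# Toy multi-label toxicity classifier sketch (NOT the repo's deep-learning
# models): TF-IDF features with a one-vs-rest logistic regression per label.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# The six Jigsaw challenge labels.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Made-up training comments and annotations for illustration only.
comments = [
    "you are a wonderful person",
    "I will hurt you, watch out",
    "what a lovely day for coding",
    "you disgusting idiot, people like you should disappear",
]
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 1],
])

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(comments)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

# Per-label probabilities for a new comment: one score per label.
probs = clf.predict_proba(vec.transform(["I will hurt you"]))
scores = dict(zip(LABELS, probs[0]))
```

A real system would train on the full Jigsaw dataset and threshold each label's probability to decide what to flag.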
Stars
14
Forks
5
Language
Jupyter Notebook
License
—
Category
NLP
Last pushed
Jul 30, 2019
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/imrahulr/Toxic-Comment-Classification-Kaggle"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
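The same endpoint can be called from Python. The base URL and path below are taken from the curl example; the helper function and the response schema are assumptions, so inspect the returned JSON before relying on particular fields.

```python
# Sketch of calling the quality API from Python instead of curl.
# quality_url() is a hypothetical helper; the endpoint comes from the
# curl example above, but the response schema is not documented here.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "imrahulr", "Toxic-Comment-Classification-Kaggle")
# Uncomment to fetch (no API key needed for up to 100 requests/day):
# data = json.loads(urlopen(url).read())
```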
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge