sevdaimany/NLP-Toxic-Comment-Classification
Natural Language Processing: Toxic Comments Detection and Classification
This helps moderators and community managers automatically identify and flag toxic comments in user-generated content. You feed it raw text comments, and it tells you if a comment is toxic or clean. This is designed for anyone managing online communities or platforms where user comments need to be screened.
No commits in the last 6 months.
Use this if you need a quick way to filter large volumes of text comments for toxicity.
Not ideal if you require a nuanced understanding of intent or highly specific categories of harmful speech beyond basic toxicity.
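The listing above describes a simple contract: raw comment text in, a toxic/clean verdict out. The repository's actual model isn't shown on this page (it is presumably a trained classifier on a dataset such as Jigsaw's), so the sketch below is only an illustration of that input/output contract using a hypothetical keyword lexicon, not the project's method.

```python
# Illustrative sketch only: TOXIC_TERMS is a hypothetical lexicon, not the
# repository's model. A real classifier would be trained on labeled data.
TOXIC_TERMS = {"idiot", "stupid", "hate", "trash"}

def classify_comment(text: str) -> str:
    """Return 'toxic' if any flagged term appears, else 'clean'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "toxic" if words & TOXIC_TERMS else "clean"

print(classify_comment("You are an idiot!"))                 # toxic
print(classify_comment("Great point, thanks for sharing."))  # clean
```

A real deployment would replace the lexicon with a trained model, but the calling code, text in and a label out, would look the same.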
Stars: 15
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jul 20, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/sevdaimany/NLP-Toxic-Comment-Classification"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
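The same endpoint can be called from Python. A minimal sketch, assuming only the URL shown in the curl command above; the JSON response schema is not documented here, so treat the returned dict's keys as unknown until you inspect a live response.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality payload as JSON (schema assumed, not documented)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("sevdaimany", "NLP-Toxic-Comment-Classification")
    print(data)
```

Unauthenticated calls count against the 100/day limit; how a key is attached (header vs. query parameter) is not stated on this page, so check the API's own docs before adding one.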
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge