imrahulr/Toxic-Comment-Classification-Kaggle

Deep Learning for Toxic Comment Classification

Score: 28 / 100 (Experimental)

This helps online platforms automatically identify and categorize harmful content. You feed it user-submitted text comments, and it flags each of the six toxicity types from the Kaggle Toxic Comment Classification Challenge: toxic, severe toxic, obscene, threat, insult, and identity hate. It's designed for content moderators, community managers, and platform administrators who need to maintain a safe and respectful online environment.

No commits in the last 6 months.

Use this if you manage user-generated content on a forum, social media platform, or comment section and need an automated way to flag or filter out problematic language.

Not ideal if you need to detect highly nuanced forms of hate speech or sarcasm that require deep contextual understanding beyond direct word analysis.
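To make the output shape concrete: this is a multi-label problem, so each comment gets six independent yes/no flags rather than one category. The toy sketch below shows that output format only; the keyword lists are illustrative placeholders and have nothing to do with the repository's actual deep-learning model.

```python
# Toy sketch of the six-label, multi-label output this kind of classifier
# produces. The keyword sets are hypothetical placeholders, NOT the repo's
# neural-network approach.

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical trigger words, purely for demonstration.
KEYWORDS = {
    "toxic": {"stupid", "idiot"},
    "threat": {"kill", "hurt"},
    "insult": {"idiot", "loser"},
}

def classify(comment: str) -> dict:
    """Return one independent boolean per label (multi-label, not multi-class)."""
    words = set(comment.lower().split())
    return {label: bool(KEYWORDS.get(label, set()) & words) for label in LABELS}

print(classify("you are an idiot"))
# Flags both "toxic" and "insult": labels are not mutually exclusive.
```

A real model replaces the keyword lookup with per-label probabilities and a threshold, but the six-flags-per-comment contract is the same.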

content-moderation online-safety community-management platform-administration user-generated-content
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 15 / 25


Stars

14

Forks

5

Language

Jupyter Notebook

License

None

Last pushed

Jul 30, 2019

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/imrahulr/Toxic-Comment-Classification-Kaggle"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
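The curl command above can also be called from code. A minimal Python sketch follows; only the endpoint URL comes from this page, and the shape of the JSON response is not documented here, so the fetch helper simply decodes whatever JSON the API returns.

```python
# Minimal sketch of querying the quality API shown above.
# The URL pattern is taken from the curl example; the response
# schema is unspecified here, so we just decode the JSON body.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode its JSON body. No API key is
    required for up to 100 requests/day, per the note above."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("nlp", "imrahulr", "Toxic-Comment-Classification-Kaggle"))
```

Calling `fetch_quality("nlp", "imrahulr", "Toxic-Comment-Classification-Kaggle")` issues the same request as the curl example.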