arunarn2/ToxicCommentChallenge

Text classification using GloVe embeddings, CNN and stacked bi-directional LSTM with Max K Pooling.

Score: 25 / 100 (Experimental)

This project helps online community managers and content moderators automatically identify and classify harmful online comments across six categories such as 'toxic', 'obscene', and 'threat'. It takes raw user comments as input and outputs classifications indicating the type of toxicity present. It's designed for anyone managing online forums, social media, or other platforms where user-generated content needs to be monitored for abusive language.
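
For orientation, here is a minimal Keras sketch of the architecture named in the description: GloVe-style embeddings feeding a CNN, stacked bi-directional LSTMs, and k-max pooling over the sequence. All layer sizes, vocabulary settings, and the six-label set are illustrative assumptions, not values taken from the repository.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed label set from the Jigsaw Toxic Comment challenge (not confirmed
# against the repository).
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

class KMaxPooling(layers.Layer):
    # Keeps the k largest activations per feature over the time axis.
    # Note: top_k returns them sorted by value rather than in temporal
    # order, a common simplification of k-max pooling.
    def __init__(self, k=3, **kwargs):
        super().__init__(**kwargs)
        self.k = k

    def call(self, inputs):                      # (batch, time, features)
        x = tf.transpose(inputs, [0, 2, 1])      # (batch, features, time)
        top_k = tf.math.top_k(x, k=self.k).values
        return tf.reshape(top_k, (-1, inputs.shape[-1] * self.k))

def build_model(vocab_size=20000, maxlen=200, embed_dim=100):
    inp = layers.Input(shape=(maxlen,))
    # In the repo, this embedding would be initialized from GloVe vectors.
    x = layers.Embedding(vocab_size, embed_dim)(inp)
    x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = KMaxPooling(k=3)(x)
    # Sigmoid head: a comment can carry several toxicity labels at once.
    out = layers.Dense(len(LABELS), activation="sigmoid")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

A sigmoid per label with binary cross-entropy treats the task as multi-label rather than multi-class, which matches a comment carrying several toxicity categories at once.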

No commits in the last 6 months.

Use this if you need to automatically detect and categorize various forms of toxicity in online user comments to maintain a safe and positive community.

Not ideal if you need human-level interpretation of sarcasm or other subtle, context-dependent forms of online abuse; automated classifiers have real limitations there.

content-moderation online-safety community-management social-media-monitoring text-analysis
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 13 / 25

How are scores calculated?
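
In this listing, the four sub-scores above sum to the overall score: 0 + 4 + 8 + 13 = 25 / 100.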

Stars: 8
Forks: 2
Language: Python
License: None
Last pushed: May 08, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/arunarn2/ToxicCommentChallenge"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
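
For scripting, the same endpoint can be called from Python. This is a minimal sketch; the structure and field names of the JSON payload are not documented on this page, so inspect the response before relying on specific keys.

import requests

# Same endpoint as the curl example above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "embeddings/arunarn2/ToxicCommentChallenge")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())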