walter-lead/toxic_comments

CNN and LSTM multi-label text classification

Quality score: 27 / 100 (Experimental)

This project helps identify and categorize offensive content in user-generated text, such as comments or forum posts. Given a piece of text, it outputs a score for each of several toxicity categories (e.g., 'toxic', 'obscene', 'threat'). This is useful for content moderators, platform administrators, and community managers who need to maintain respectful online environments.
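The implied workflow looks roughly like the sketch below. This is a minimal usage sketch, not the repo's actual API: the file names (toxic_model.h5, tokenizer.pkl), the 200-token sequence length, and the six-label set beyond the three categories named above are all assumptions.

import pickle
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical label set: the standard six Jigsaw toxicity labels.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

model = load_model("toxic_model.h5")        # hypothetical saved CNN/LSTM model
with open("tokenizer.pkl", "rb") as f:      # tokenizer fitted during training
    tokenizer = pickle.load(f)

def score_comment(text, maxlen=200):
    """Return a {label: probability} dict for one comment."""
    seq = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=maxlen)
    probs = model.predict(seq, verbose=0)[0]  # one sigmoid score per label
    return dict(zip(LABELS, (float(p) for p in probs)))

print(score_comment("you are a terrible person"))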

No commits in the last 6 months.

Use this if you need to automatically detect and flag various types of toxic language in text-based user contributions to ensure community guidelines are met.

Not ideal if you need a pre-packaged moderation system with advanced features like user banning, content removal, or human review workflows built-in.

content-moderation community-management online-safety social-media-analysis text-analytics
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25

The overall score is the sum of the four 25-point components above: 0 + 5 + 8 + 14 = 27.

Stars: 11
Forks: 3
Language: Python
License: None
Last pushed: Apr 13, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/walter-lead/toxic_comments"

The endpoint is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
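The same data can be fetched from Python. A minimal sketch, assuming the endpoint returns JSON (only the URL comes from this page; the response schema is not documented here):

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/walter-lead/toxic_comments"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()   # surface HTTP errors (e.g., rate limiting)
data = resp.json()
print(data)               # inspect the actual schema before relying on keys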