sevdaimany/NLP-Toxic-Comment-Classification

Natural Language Processing: Toxic Comments Detection and Classification

Score: 14 / 100 · Experimental

This project helps moderators and community managers automatically identify and flag toxic comments in user-generated content. You feed it raw text comments, and it labels each one as toxic or clean. It is designed for anyone managing online communities or platforms where user comments need to be screened.

No commits in the last 6 months.

Use this if you need a quick way to filter large volumes of text comments for toxicity.

Not ideal if you require a nuanced understanding of intent or highly specific categories of harmful speech beyond basic toxicity.
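The feed-in-text, get-a-label workflow described above can be sketched as a toy keyword filter. This is illustrative only: the blocklist terms and function name are invented here, and the repository's actual classifier is a trained NLP model, not a keyword match.

```python
# Toy toxicity screen: flags comments containing any blocklisted term.
# Illustrative sketch only -- not the repository's actual model.
TOXIC_TERMS = {"idiot", "stupid", "hate"}  # hypothetical example terms

def screen_comment(comment: str) -> str:
    """Return 'toxic' if any blocklisted term appears, else 'clean'."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return "toxic" if words & TOXIC_TERMS else "clean"

comments = ["You are an idiot!", "Thanks for the helpful answer."]
labels = [screen_comment(c) for c in comments]  # ["toxic", "clean"]
```

A real classifier replaces the keyword test with a model score, but the input/output contract (raw comment in, toxic/clean label out) is the same.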

Tags: content-moderation · community-management · online-safety · text-screening · user-generated-content
Badges: No License · Stale (6 mo) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 15
Forks: n/a
Language: Python
License: None
Last pushed: Jul 20, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/sevdaimany/NLP-Toxic-Comment-Classification"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
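The same endpoint can be called from Python. A minimal sketch follows; only the URL shape comes from the curl example above, and the response schema is not documented here, so the fetch is left commented out rather than assuming field names.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("sevdaimany", "NLP-Toxic-Comment-Classification")
# Uncomment to fetch live data (needs network; 100 requests/day without a key):
# data = json.loads(urlopen(url).read())
```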