Prakhar-FF13/Toxic-Comments-Classification

Predict the toxicity rating of a comment made by a user.

Score: 37 / 100 (Emerging)

This tool helps content moderators, community managers, and social media platforms automatically identify and flag toxic user comments. You provide a user-generated comment, and it outputs a numerical score indicating that comment's toxicity level, helping maintain a safer online environment.

No commits in the last 6 months.

Use this if you need to quickly assess and moderate a large volume of user comments for toxicity, ranging from mild to severe.

Not ideal if you need a detailed breakdown of specific toxicity subtypes (e.g., threat, insult, sexually explicit content) or explanations for why a comment was flagged.

content-moderation community-management online-safety social-media-management brand-reputation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 21 / 25

How are scores calculated?
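The overall score shown above matches the sum of the four category scores, each out of 25. A minimal sketch of that assumed formula (the exact scoring method is not documented on this page):

```python
# Assumption: the overall quality score is the sum of the four category
# scores listed on this page, each capped at 25.
scores = {"Maintenance": 0, "Adoption": 8, "Maturity": 8, "Community": 21}

total = sum(scores.values())
maximum = len(scores) * 25

print(f"{total} / {maximum}")  # → 37 / 100
```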

Stars: 49
Forks: 33
Language: HTML
License: None
Last pushed: May 28, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Prakhar-FF13/Toxic-Comments-Classification"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
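The curl call above can also be scripted. A minimal Python sketch that builds the endpoint URL for any repository; the response schema is not shown on this page, so the actual fetch is left commented out:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the public quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "Prakhar-FF13", "Toxic-Comments-Classification")
print(url)

# To actually call the API (no key needed, 100 requests/day):
# data = json.load(urllib.request.urlopen(url))
```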