techn0man1ac/ToxicCommentClassification
In the modern era of social media, toxicity in online comments poses a significant challenge, creating a negative atmosphere for communication. From abuse to insults, toxic behavior discourages the free exchange of thoughts and ideas among users. This project offers a solution to this problem.
This project helps online community managers and social media platforms automatically identify and classify harmful user comments. It takes raw text comments as input and determines if they contain different types or levels of toxicity, such as insults, threats, or hate speech. The output empowers moderators to quickly address problematic content and foster a more positive online environment.
Use this if you manage online communities or social media platforms and need to automatically detect and categorize toxic comments to improve user experience.
Not ideal if you need a solution for very niche or highly nuanced forms of online harm that require deep contextual understanding beyond standard toxicity categories.
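The task described above is multi-label toxicity classification: a comment may carry several labels (insult, threat, hate speech) at once. A minimal sketch of that setup, using TF-IDF features with one-vs-rest logistic regression via scikit-learn; this is a common baseline for the task, not the repository's actual pipeline, and the label set and toy data below are illustrative assumptions:

```python
# Illustrative multi-label toxicity classifier (NOT this repository's
# actual model): TF-IDF features + one-vs-rest logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "insult", "threat"]  # hypothetical label set

# Tiny toy corpus; each target row holds one 0/1 flag per label.
texts = [
    "you are awful",
    "have a nice day",
    "I will hurt you",
    "what an idiot",
]
targets = [
    [1, 0, 0],
    [0, 0, 0],
    [1, 0, 1],
    [1, 1, 0],
]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, targets)

# Predict label flags for a new comment: one row of 0/1 values,
# one column per entry in LABELS.
pred = model.predict(["you are awful"])
```

One-vs-rest fits an independent binary classifier per label, which matches the task: a single comment can be both an insult and a threat.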
Stars: 10
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Nov 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/techn0man1ac/ToxicCommentClassification"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
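For programmatic use, the curl call above can be wrapped in a small Python helper. This is a sketch that assumes the endpoint returns JSON; the response schema is not documented here, so no field names are assumed:

```python
import json
import urllib.request

# Base of the quality endpoint shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the JSON payload for a repository.

    The response schema is undocumented here, so the returned dict's
    keys are whatever the API provides.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


url = quality_url("techn0man1ac", "ToxicCommentClassification")
```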
Higher-rated alternatives
zake7749/DeepToxic
Top 1% solution to the toxic comment classification challenge on Kaggle.
DenisIndenbom/AntiToxicBot
AntiToxicBot is a bot that detects toxic users in a chat using data science and machine learning...
aralroca/react-text-toxicity
Detect text toxicity in a simple way, using React. Based on a Keras model, loaded with TensorFlow.js.
bensonruan/Toxic-Comment-Classifier
charliegerard/safe-space
Github action that checks the toxicity level of comments and PR reviews to help make repos safe spaces.