walter-lead/toxic_comments
CNN and LSTM multi-label text classification
This project helps identify and categorize offensive content in user-generated text, such as comments or forum posts. You input a piece of text, and it outputs a score for different categories of toxicity (e.g., 'toxic', 'obscene', 'threat'). This is useful for content moderators, platform administrators, or community managers who need to maintain respectful online environments.
No commits in the last 6 months.
Use this if you need to automatically detect and flag various types of toxic language in text-based user contributions to ensure community guidelines are met.
Not ideal if you need a pre-packaged moderation system with advanced features like user banning, content removal, or human review workflows built-in.
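The defining mechanic of multi-label classification, as described above, is that each toxicity category receives an independent score rather than competing in a single softmax distribution, so one comment can be flagged as both 'toxic' and 'obscene' at once. A minimal sketch of that scoring step, with hypothetical logits standing in for the CNN/LSTM output:

```python
import math

# Category names taken from the description above; the logits below are
# hypothetical values standing in for a trained model's raw outputs.
CATEGORIES = ["toxic", "obscene", "threat"]

def sigmoid(x: float) -> float:
    """Squash a raw logit into an independent [0, 1] score."""
    return 1.0 / (1.0 + math.exp(-x))

def score(logits):
    """Map per-category logits to per-category toxicity scores.

    Each category is scored independently (sigmoid), unlike single-label
    classification where softmax forces the scores to sum to 1.
    """
    return {cat: sigmoid(z) for cat, z in zip(CATEGORIES, logits)}

# A single comment can exceed the threshold in several categories at once.
scores = score([2.0, 0.0, -3.0])
flagged = [cat for cat, s in scores.items() if s >= 0.5]
```

Here `flagged` contains both 'toxic' and 'obscene', illustrating why content moderators get a vector of category scores per comment rather than one label.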
Stars: 11
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Apr 13, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/walter-lead/toxic_comments"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
yongzhuo/Keras-TextClassification
Chinese long-text classification, short-sentence classification, multi-label classification, and sentence-pair similarity (Chinese Text Classification of Keras NLP, multi-label classify, or...
Priberam/SentimentAnalysis
Sentiment Analysis: Deep Bi-LSTM+attention model
dirkhovy/text_analysis_for_social_science
Code for the CUP Elements on text analysis in Python for social scientists
melihbodur/Text_and_Audio_classification_with_Bert
Text Classification in Turkish Texts with Bert
abhilash1910/MiniClassifier
Deep Learning Library for Text Classification.