andiosika/NLP-to-identify-toxic-or-abusive-language-for-online-conversation-using-Keras-Deep-Learning-Models
Natural Language Processing: a multi-headed model that detects different types of online discussion toxicity, such as threats, obscenity, insults, and identity-based hate, using a Keras RNN (LSTM) with focal loss to address a severely imbalanced dataset.
This helps online communities, social media platforms, and content moderators automatically identify and flag toxic language. By analyzing incoming text, it classifies comments into categories like threats, obscenity, insults, and identity-based hate. Community managers and moderation teams can use this to maintain healthier online discussions.
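The focal loss mentioned above down-weights easy, well-classified examples so training focuses on the rare toxic classes. Below is a minimal NumPy sketch of the binary form; the repository presumably implements it as a Keras custom loss with TensorFlow ops, and the `gamma` and `alpha` values here are common defaults, not taken from the notebook:

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss, averaged over labels.

    gamma down-weights well-classified examples; alpha up-weights the
    positive (rare toxic) class. Both values are illustrative assumptions.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    p_t = np.where(y_true == 1, y_pred, 1 - y_pred)   # probability of the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

# A confidently wrong prediction is penalized far more than a confident
# correct one, which is the point when toxic labels are rare:
easy = binary_focal_loss([1], [0.95])   # well classified -> tiny loss
hard = binary_focal_loss([1], [0.10])   # misclassified   -> large loss
```

With `gamma=0` this reduces to alpha-weighted binary cross-entropy; raising `gamma` shifts the effective gradient toward the hard, rare examples such as threats and identity-based hate.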
No commits in the last 6 months.
Use this if you need to detect and categorize various forms of abusive or toxic language within online conversations to improve community standards.
Not ideal if you require highly accurate detection for very rare categories like threats or identity-based hate, as its performance for these specific types can be limited.
Stars: 10
Forks: 2
Language: Jupyter Notebook
License: —
Category:
Last pushed: Mar 17, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/andiosika/NLP-to-identify-toxic-or-abusive-language-for-online-conversation-using-Keras-Deep-Learning-Models"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge