andiosika/NLP-to-identify-toxic-or-abusive-language-for-online-conversation-using-Keras-Deep-Learning-Models

Natural Language Processing: A multi-headed model that detects different types of online discussion toxicity, such as threats, obscenity, insults, and identity-based hate, using a Keras LSTM RNN with focal loss to address a highly imbalanced dataset.

Score: 26 / 100 (Experimental)

This helps online communities, social media platforms, and content moderators automatically identify and flag toxic language. By analyzing incoming text, it classifies comments into categories like threats, obscenity, insults, and identity-based hate. Community managers and moderation teams can use this to maintain healthier online discussions.
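The description above pairs an LSTM classifier with focal loss so that rare classes (e.g. threats) are not drowned out by the easy, abundant negatives. A minimal NumPy sketch of binary focal loss for multi-label targets; the `gamma` and `alpha` values here are common defaults, not values taken from this repository:

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss for binary / multi-label targets.

    Down-weights well-classified examples by (1 - p_t)**gamma so the
    rare positive classes contribute more to the average loss.
    gamma=2.0 and alpha=0.25 are the defaults from the focal-loss
    literature, assumed here for illustration.
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)          # avoid log(0)
    p_t = np.where(y_true == 1, y_pred, 1 - y_pred)  # prob. of the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```

With `gamma=0` the modulating factor disappears and this reduces to alpha-weighted binary cross-entropy; raising `gamma` shrinks the loss on confident correct predictions, which is what helps on an imbalanced toxicity dataset.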

No commits in the last 6 months.

Use this if you need to detect and categorize various forms of abusive or toxic language within online conversations to improve community standards.

Not ideal if you require highly accurate detection for very rare categories like threats or identity-based hate, as its performance for these specific types can be limited.

content-moderation community-management online-safety social-media-management digital-ethics
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 13 / 25


Stars: 10
Forks: 2
Language: Jupyter Notebook
License: None
Last pushed: Mar 17, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/andiosika/NLP-to-identify-toxic-or-abusive-language-for-online-conversation-using-Keras-Deep-Learning-Models"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
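The same endpoint shown in the curl command can be queried from Python with the standard library alone. A small sketch, assuming only the URL pattern visible above; the JSON response schema is not documented here, so the fetched payload is returned as-is rather than unpacked into named fields:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given ecosystem/repo."""
    return f"{API_BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch the quality-score JSON (requires network access)."""
    with urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

# Example: URL for the repo on this page (no request is made here).
url = quality_url(
    "nlp",
    "andiosika/NLP-to-identify-toxic-or-abusive-language-for-online-"
    "conversation-using-Keras-Deep-Learning-Models",
)
```

Unauthenticated calls are limited to 100 requests/day, so batch consumers should cache responses or register for a key.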