hate-alert/Tutorial-Resources
Resources and tools for the Tutorial - "Hate speech detection, mitigation and beyond" presented at ICWSM 2021
This project helps social media platforms and content moderators automatically identify and flag hate speech in user-generated content across multiple languages. Given input text, the system classifies whether it is abusive and highlights the specific problematic phrases. It is designed for social scientists and content moderation teams looking to scale their efforts against online toxicity.
No commits in the last 6 months.
Use this if you need to automatically detect hate speech, identify the abusive parts of text, or find counter-speech in multilingual social media content.
Not ideal if you're looking for a fully autonomous moderation solution without human oversight, as these models may carry biases and require careful application.
Stars: 39
Forks: 7
Language: Python
License: MIT
Category:
Last pushed: Feb 23, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/hate-alert/Tutorial-Resources"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
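For scripted access, the same endpoint can be called from Python. Below is a minimal sketch using only the standard library; the URL structure is taken from the curl example above, but the shape of the JSON response is not documented here, so no particular fields are assumed:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL, matching the curl example above."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body (response schema not shown here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `quality_url("nlp", "hate-alert", "Tutorial-Resources")` reproduces the URL used in the curl command.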
Higher-rated alternatives
- unitaryai/detoxify: Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
- kensk8er/chicksexer: A Python package for gender classification.
- Infinitode/ValX: ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
- PavelOstyakov/toxic: Toxic Comment Classification Challenge
- minerva-ml/open-solution-toxic-comments: Open solution to the Toxic Comment Classification Challenge