hate-alert/HateALERT-EVALITA
Code for replicating the results of team 'hateminers' at EVALITA 2018 for the AMI (Automatic Misogyny Identification) task
This project helps social media analysts, content moderators, and researchers identify and categorize hate speech targeting women in online text. You provide raw text data, and it classifies whether the text contains misogynistic content. If it does, it further categorizes the specific type of misogyny.
No commits in the last 6 months.
Use this if you need to automatically detect and classify misogynistic hate speech in large volumes of text, especially in English.
Not ideal if you need to detect hate speech in languages other than English or if you require real-time content moderation at scale without custom integration.
Stars: 13
Forks: 3
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 02, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/hate-alert/HateALERT-EVALITA"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
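The same endpoint can be called from Python instead of curl. A minimal sketch, assuming only the URL pattern shown above; the shape of the JSON response is not documented here and is treated as an assumption:

```python
import json
import urllib.parse
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for an owner/repo pair."""
    return f"{API_BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality record and parse it as JSON (requires network access).

    Assumes the endpoint returns a JSON object; adjust if the actual
    response format differs.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `quality_url("hate-alert", "HateALERT-EVALITA")` reproduces the URL used in the curl command.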
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge