LCS2-IIITD/Hate_Norm
[KDD 2022] Proactively Reducing the Hate Intensity of Online Posts via Hate Speech Normalization
This tool helps social media managers, community moderators, and platform administrators proactively reduce the intensity of hate speech in online posts. Given raw, potentially hateful text, it identifies the hateful spans and produces a 'normalized' version with reduced hate intensity, helping maintain healthier online communities.
No commits in the last 6 months.
Use this if you need to automatically identify hateful content in user-generated text and rewrite it into a more acceptable, less intense form before or after publication.
Not ideal if you require real-time, high-volume content moderation for live streams or extremely short-form content where immediate, nuanced human judgment is paramount.
Stars
10
Forks
1
Language
Jupyter Notebook
License
MIT
Category
NLP
Last pushed
May 08, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/LCS2-IIITD/Hate_Norm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
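Beyond the one-off curl call above, the endpoint can be queried programmatically. Below is a minimal Python sketch using only the standard library; the endpoint URL comes from this page, but the `Authorization: Bearer` header for keyed requests and the shape of the JSON response are assumptions, so check the API documentation before relying on them.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def build_url(owner: str, repo: str) -> str:
    """Construct the quality-data endpoint URL for a repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality record for a repository.

    Passing an API key raises the rate limit from 100 to 1,000
    requests/day. The header name used here is an assumption.
    """
    req = urllib.request.Request(build_url(owner, repo))
    if api_key:
        # Assumed auth scheme -- verify against the API docs.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Example: URL for the repository on this page.
url = build_url("LCS2-IIITD", "Hate_Norm")
```

Calling `fetch_quality("LCS2-IIITD", "Hate_Norm")` would return the same data the curl command above prints, as a Python dict.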
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge