ogtal/A-ttack
This repository contains the code and model weights for the A&ttack algorithm.
This tool helps identify hate speech and abusive language in short Danish texts, such as social media comments. You provide a Danish text, and it tells you if the text contains stigmatizing, demeaning, offensive, harassing, or threatening remarks. It is designed for researchers, journalists, or anyone monitoring public discourse for harmful language.
No commits in the last 6 months.
Use this if you need to automatically detect instances of verbal attacks or hate speech in Danish text snippets from public discussions, like those found on social media.
Not ideal if you need to analyze longer documents, detect more nuanced forms of negativity, or work with languages other than Danish.
Stars: 8
Forks: —
Language: Python
License: —
Category: —
Last pushed: May 31, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ogtal/A-ttack"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge