rafelps/HLE-UPC-SemEval-2021-ToxicSpansDetection
HLE-UPC at SemEval-2021 Task 5: Toxic Spans Detection
This project helps content moderators and online community managers identify specific phrases or words within text that contribute to its toxicity. You provide a text input, and the tool highlights the exact 'toxic spans' that make the content harmful. It's designed for anyone needing to pinpoint and address toxicity in written online content.
No commits in the last 6 months.
Use this if you need to precisely locate and understand which parts of a sentence or message are considered toxic.
Not ideal if you only need a general classification of whether an entire text is toxic or not.
Stars: 8
Forks: —
Language: Python
License: —
Category: —
Last pushed: Nov 26, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rafelps/HLE-UPC-SemEval-2021-ToxicSpansDetection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
StyrbjornKall/TRIDENT
A collection of transformer-based models and developmental scripts presented in the publication...
Nithin-Holla/meme_challenge
Repository containing code from team Kingsterdam for the Hateful Memes Challenge
viddexa/moderators
One package to moderate them all
jaygala24/fed-hate-speech
The official code repository for the paper titled "A Federated Approach for Hate Speech...
richouzo/hate-speech-detection-survey
Trained Neural Networks (LSTM, HybridCNN/LSTM, PyramidCNN, Transformers, etc.) & comparison for...