EliasCai/bert-toxicity-classification
BERT on Jigsaw Unintended Bias in Toxicity Classification
This project fine-tunes a BERT model to flag toxic comments. It takes raw text data, trains the model, and outputs per-comment toxicity predictions along with a submission file for the Kaggle competition. It is aimed at AI/ML developers and researchers building content-moderation or online-safety tools.
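The repository's own training code is not reproduced on this page, but the sketch below illustrates the kind of inference step it describes, using the Hugging Face transformers library. The checkpoint is borrowed from unitary/toxic-bert, published by the unitaryai/detoxify project listed under alternatives below; the checkpoint choice and the 0.5 threshold are illustrative assumptions, not this repo's code.

# Hedged sketch: scoring comments with a fine-tuned BERT toxicity
# classifier. Uses the unitary/toxic-bert checkpoint (from the
# unitaryai/detoxify project listed below) as a stand-in; this is
# not EliasCai's actual code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unitary/toxic-bert")
model = AutoModelForSequenceClassification.from_pretrained("unitary/toxic-bert")
model.eval()

comments = ["You are brilliant!", "You are an idiot."]
inputs = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# This checkpoint is multi-label, so apply sigmoid (not softmax) per
# label; label 0 is "toxic" in its config. The 0.5 cutoff is an
# assumption for illustration.
probs = torch.sigmoid(logits)[:, 0]
for text, p in zip(comments, probs.tolist()):
    print(f"{p:.3f}  toxic={p > 0.5}  {text!r}")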
No commits in the last 6 months.
Use this if you are an AI/ML developer working to train and benchmark models for detecting toxicity in text.
Not ideal if you are a non-developer looking for an out-of-the-box content-moderation solution, or if you need toxicity scores without training a model yourself.
Stars: 50
Forks: 15
Language: Python
License: —
Category: NLP
Last pushed: Apr 07, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/EliasCai/bert-toxicity-classification"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
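For scripted access, the same endpoint can be queried from Python. The sketch below assumes the endpoint returns JSON; the field names are guesses based on the metadata shown on this page, not a documented schema, and it uses the keyless tier since the key mechanism is not documented here.

# Hedged sketch: fetching this listing's quality data from the API.
# Response field names are hypothetical, inferred from the page above.
import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "EliasCai/bert-toxicity-classification")

resp = requests.get(URL, timeout=10)  # keyless tier: 100 requests/day
resp.raise_for_status()
data = resp.json()

# Inspect whatever the endpoint actually returns before relying on keys.
print(data)
print(data.get("stars"), data.get("forks"))  # hypothetical keys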
Higher-rated alternatives
unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge