EliasCai/bert-toxicity-classification

bert on Jigsaw Unintended Bias in Toxicity Classification

Score: 34 / 100 (Emerging)

This project helps developers evaluate and refine AI models designed to flag toxic comments and conversations. It takes raw text data, trains a BERT model, and outputs predictions on whether text is toxic, along with a submission file for evaluation platforms. AI/ML developers or researchers working on content moderation or online safety applications would find this useful.
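As an illustration of the final step of that pipeline (not the repository's actual code), the sketch below writes model probabilities to a Kaggle-style submission file; the `id`/`prediction` column names match the Jigsaw competition's expected format, and the example ids are made up.

```python
# Illustrative sketch only: turn per-comment toxicity probabilities
# into a submission CSV for an evaluation platform such as Kaggle.
import csv

def write_submission(ids, probabilities, path="submission.csv"):
    """Write (id, prediction) rows in the assumed submission format."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "prediction"])  # assumed header
        for comment_id, prob in zip(ids, probabilities):
            writer.writerow([comment_id, f"{prob:.6f}"])

if __name__ == "__main__":
    # Hypothetical ids and scores, for demonstration only.
    write_submission([59848, 59849], [0.912345, 0.031200])
```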

No commits in the last 6 months.

Use this if you are an AI/ML developer working to train and benchmark models for detecting toxicity in text.

Not ideal if you are a non-developer seeking an out-of-the-box content-moderation solution, or if you need to assess toxicity without building a model.

content-moderation natural-language-processing machine-learning-engineering text-classification
No License | Stale (6m) | No Package | No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 18 / 25


Stars: 50
Forks: 15
Language: Python
License: none
Last pushed: Apr 07, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/EliasCai/bert-toxicity-classification"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
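The same endpoint can be called from Python. The sketch below just mirrors the curl example's URL pattern; the JSON response shape is not documented here, so the result is returned as-is for inspection.

```python
# Sketch of querying the quality API, assuming the same URL pattern
# as the curl example above and a JSON response body.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """Fetch and decode the (assumed JSON) quality data."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Network call omitted here; just show the URL being built.
    print(quality_url("nlp", "EliasCai", "bert-toxicity-classification"))
```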