g8a9/ear

Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists"

Score: 34 / 100 (Emerging)

This tool helps researchers and practitioners in natural language processing (NLP) build fairer, less biased models for tasks such as hate speech detection. It takes a pre-trained language model and training data, then adds an entropy-based attention regularization term during fine-tuning. The resulting model is more robust and less likely to misclassify content because of unintended biases tied to specific identity terms. It is designed for anyone developing or deploying text-based AI.
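As a rough illustration of the idea behind the paper, not the repository's actual code: an entropy-based penalty discourages attention distributions from collapsing onto a handful of tokens (e.g. identity terms). A minimal NumPy sketch, with all function names my own, might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ear_penalty(attention_logits, eps=1e-9):
    """Negative mean entropy of the attention distributions.

    Scaled and added to the task loss, this term pushes attention
    toward higher entropy, so the model cannot rely too heavily on
    any single token. (Illustrative sketch; see the repo for the
    paper's real implementation.)
    """
    probs = softmax(attention_logits, axis=-1)
    entropy = -(probs * np.log(probs + eps)).sum(axis=-1)
    return -entropy.mean()

# total_loss = task_loss + reg_strength * ear_penalty(logits)
```

Uniform attention yields the most negative penalty (maximal entropy), while sharply peaked attention is penalized less, so minimizing the combined loss trades task fit against attention concentration.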

No commits in the last 6 months.

Use this if you are developing AI models for text analysis, such as hate speech detection, and need to reduce unintended biases that can arise from models overfitting to specific words related to identity.

Not ideal if you are looking for a pre-packaged, ready-to-use API for bias detection without any model training or customization.

Natural-Language-Processing Bias-Mitigation Hate-Speech-Detection AI-Ethics Text-Analytics
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 50
Forks: 5
Language: Python
License: MIT
Last pushed: May 31, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/g8a9/ear"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
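For scripted access rather than curl, a minimal Python sketch using only the standard library (the response schema is not documented here, so only the URL shape from the curl example above is assumed):

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-API URL for a repository,
    matching the curl example: /quality/<category>/<owner>/<repo>."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

# Fetching the data (100 requests/day without a key):
# with urlopen(quality_url("nlp", "g8a9", "ear")) as resp:
#     data = json.loads(resp.read())
```

The fetch itself is left commented out since the response fields are not specified on this page.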