princeton-nlp/MABEL

EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975

Quality score: 29 / 100 (Experimental)

This project helps machine learning practitioners and researchers build or evaluate fairer language models by reducing gender bias. It fine-tunes pre-trained language models on existing natural language inference data, producing models with attenuated gender bias that retain strong performance on a range of language understanding tasks. It is aimed at anyone concerned with fairness in AI applications.

No commits in the last 6 months.

Use this if you are developing or evaluating natural language processing systems and need to ensure they exhibit less gender bias in their predictions or representations.

Not ideal if your primary concern is solely maximizing performance on standard language understanding benchmarks without considering fairness, or if you are not working with language models.

Topics: AI fairness, natural language processing, machine learning research, ethical AI, bias mitigation
Flags: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 38
Forks: 2
Language: Python
License: MIT
Last pushed: Dec 14, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/princeton-nlp/MABEL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
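The same endpoint can be called from Python with the standard library. This is a minimal sketch: the URL comes from the page above, but the response schema and the authentication header for an API key are assumptions (check the API's own documentation for the actual field names).

```python
import json
from urllib.request import Request, urlopen

# Base URL taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality-score URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem, owner, repo, api_key=None):
    """Fetch the quality record; assumes a JSON response body."""
    req = Request(quality_url(ecosystem, owner, repo))
    if api_key:
        # A free key raises the limit to 1,000 requests/day.
        # The header name here is a guess, not documented on this page.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urlopen(req) as resp:
        return json.load(resp)

print(quality_url("nlp", "princeton-nlp", "MABEL"))
```

Without a key, requests share the anonymous 100/day quota, so cache results rather than calling the endpoint in a loop.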