princeton-nlp/MABEL
EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975
This project helps machine learning practitioners and researchers build and evaluate fairer language models by reducing gender bias. It fine-tunes pre-trained language models on existing natural language inference (entailment) data, producing models with attenuated gender bias that retain strong performance on standard language understanding tasks. It is aimed at users who care about fairness in AI applications.
No commits in the last 6 months.
Use this if you are developing or evaluating natural language processing systems and need to ensure they exhibit less gender bias in their predictions or representations.
Not ideal if your sole concern is maximizing performance on standard language understanding benchmarks without regard to fairness, or if you are not working with language models.
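As a rough illustration of how a MABEL checkpoint would be used, the sketch below loads it as a drop-in BERT-style encoder with Hugging Face transformers. The model ID princeton-nlp/mabel-bert-base-uncased is an assumption; check the repository README for the exact released checkpoint names.

# Minimal sketch: load a MABEL checkpoint as a BERT-style encoder.
# The Hub ID below is assumed, not confirmed by this page.
from transformers import AutoModel, AutoTokenizer

model_name = "princeton-nlp/mabel-bert-base-uncased"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a sentence and take the [CLS] token representation,
# as with any BERT-style encoder.
inputs = tokenizer("The doctor finished her shift.", return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # e.g. torch.Size([1, 768]) for a base-size model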
Stars: 38
Forks: 2
Language: Python
License: MIT
Category: NLP
Last pushed: Dec 14, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/princeton-nlp/MABEL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
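The same data can be fetched from a script. Below is a minimal Python sketch using the endpoint shown above; it assumes the endpoint returns JSON, and the response fields are not documented on this page.

# Minimal sketch: query the quality endpoint from Python.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/princeton-nlp/MABEL"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())       # response schema not documented here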
Higher-rated alternatives
dccuchile/wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes...
dreji18/Fairness-in-AI
Detecting Bias and ensuring Fairness in AI solutions
amazon-science/bold
Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language...
dhfbk/variationist
Variationist: Exploring Multifaceted Variation and Bias in Written Language Data (ACL 2024 demo track)
soarsmu/BiasFinder
BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems