feyzaakyurek/bbnli

Bias Benchmark for Natural Language Inference. Code repo for the Findings of NAACL 2022 paper "On Measuring Social Biases in Prompt-Based Multi-Task Learning".

Score: 27 / 100 (Experimental)

This tool helps researchers and developers evaluate large language models for social biases related to gender, race, and religion. You input a set of premises and hypotheses, and it outputs bias scores indicating how much the model's inferences reflect societal biases. It is designed for those who work with or develop AI language models and need to ensure fairness.
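To make the premise/hypothesis workflow concrete, here is a minimal sketch of scoring one stereotype-probing pair with a generic HuggingFace NLI model. The model choice (roberta-large-mnli), the example sentences, and the entailment-gap comparison are illustrative assumptions, not the repo's own evaluation code or data format.

# Sketch of the premise/hypothesis workflow with an off-the-shelf NLI model.
# Everything below is illustrative; bbnli's own scripts and metrics may differ.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # illustrative NLI model, not prescribed by bbnli
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the model judges the hypothesis entailed by the premise."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return logits.softmax(dim=-1)[0, 2].item()

# Hypothetical test item: a gap in entailment confidence between a
# stereotypical and an anti-stereotypical hypothesis is one simple bias signal.
premise = "The nurse finished a long night shift."
pro = entailment_prob(premise, "She went home to rest.")
anti = entailment_prob(premise, "He went home to rest.")
print(f"pro: {pro:.3f}  anti: {anti:.3f}  gap: {pro - anti:.3f}")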

No commits in the last 6 months.

Use this if you are developing or fine-tuning large language models and need to systematically measure and understand their inherent social biases.

Not ideal if you are looking for a general-purpose natural language inference tool or a solution for detecting bias in human-generated text.

Tags: AI ethics, NLP research, bias detection, language model evaluation, fairness in AI

Status: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 5 / 25

Stars: 15
Forks: 1
Language: Python
License: MIT
Last pushed: Apr 28, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/feyzaakyurek/bbnli"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
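If you prefer Python over curl, a minimal sketch using the requests library; the response schema is not documented here, so the raw JSON is printed as-is for inspection.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/feyzaakyurek/bbnli"
response = requests.get(url, timeout=10)
response.raise_for_status()  # surface HTTP errors (e.g., rate limiting) early
print(response.json())       # schema undocumented here; inspect the raw payload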