candacelax/bias-in-vision-and-language
Code for paper "Measuring Social Biases in Grounded Vision and Language Embeddings"
This project helps researchers and ethical AI practitioners identify social biases in artificial intelligence models that understand both images and text. You provide sets of images and associated words, and the tool evaluates whether the model exhibits biased associations (e.g., linking certain demographics to specific professions). The output indicates the strength and nature of these biases in the model's representations. It's designed for those evaluating fairness in multimodal AI systems.
No commits in the last 6 months.
Use this if you need to quantify social biases in visually grounded language models like ViLBERT or VisualBERT, especially when working with custom image and text datasets.
Not ideal if you are looking for an out-of-the-box solution to debias an existing AI model, as this project focuses on measurement rather than mitigation.
Stars: 9
Forks: 2
Language: Shell
License: —
Category: —
Last pushed: Oct 08, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/candacelax/bias-in-vision-and-language"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
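The endpoint returns JSON, so the response can be piped into a quick field extraction. A minimal sketch is below; note that the field names and sample payload are assumptions for illustration, not the documented schema of this API, so inspect a real response before relying on them.

```shell
# Hypothetical sample of the API response; the field names ("repo",
# "stars", "forks") are assumed for illustration, not taken from the
# API documentation.
response='{"repo":"candacelax/bias-in-vision-and-language","stars":9,"forks":2}'

# In practice you would fetch it live:
#   response=$(curl -s "https://pt-edge.onrender.com/api/v1/quality/nlp/candacelax/bias-in-vision-and-language")

# Extract a single field from the JSON (using python3 so no extra
# tools beyond a standard install are needed).
echo "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["stars"])'
```

The same pattern works for any other field in the response; swap `"stars"` for the key you need once you've confirmed the actual schema.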
Higher-rated alternatives
dccuchile/wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes...
dreji18/Fairness-in-AI
Detecting Bias and ensuring Fairness in AI solutions
amazon-science/bold
Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language...
dhfbk/variationist
Variationist: Exploring Multifaceted Variation and Bias in Written Language Data (ACL 2024 demo track)
soarsmu/BiasFinder
BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems