candacelax/bias-in-vision-and-language

Code for paper "Measuring Social Biases in Grounded Vision and Language Embeddings"

Quality score: 26 / 100 (Experimental)

This project helps researchers and AI-ethics practitioners identify social biases in models that understand both images and text. You provide sets of images and associated words, and the tool evaluates whether the model exhibits biased associations (e.g., linking certain demographics to specific professions). The output indicates the strength and direction of these biases in the model's representations. It is designed for anyone evaluating fairness in multimodal AI systems.

No commits in the last 6 months.

Use this if you need to quantify social biases in visually grounded language models such as ViLBERT or VisualBERT, especially when working with custom image and text datasets.

Not ideal if you are looking for an out-of-the-box solution to debias an existing AI model, as this project focuses on measurement rather than mitigation.
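The measurement follows WEAT-style association tests extended to grounded embeddings: given two sets of target concepts and two sets of attributes, it compares how strongly each target set associates with each attribute set. As a rough illustration only, here is a minimal NumPy sketch of the standard WEAT effect size over precomputed embedding vectors; this is not this repository's actual code, and all names below are hypothetical:

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size over target sets X and Y.
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)

# Toy usage with random vectors standing in for model embeddings.
rng = np.random.default_rng(0)
X, Y, A, B = (list(rng.normal(size=(4, 16))) for _ in range(4))
print(weat_effect_size(X, Y, A, B))

An effect size near zero indicates little differential association; larger magnitudes indicate stronger bias.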

AI-ethics bias-detection multimodal-AI fairness-assessment computational-social-science
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 13 / 25

Stars: 9
Forks: 2
Language: Shell
License: None
Last pushed: Oct 08, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/candacelax/bias-in-vision-and-language"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
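For programmatic use, the same endpoint can be queried from code. A minimal Python sketch using requests follows; the response schema is not documented here, so treat any field names as assumptions to verify against the live response:

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "candacelax/bias-in-vision-and-language")
resp = requests.get(url, timeout=10)   # anonymous access: 100 requests/day
resp.raise_for_status()                # fail loudly on HTTP errors
report = resp.json()                   # parsed JSON quality report
print(report)                          # inspect the fields before relying on them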