LukasStruppek/Exploiting-Cultural-Biases-via-Homoglyphs

[Journal of Artificial Intelligence Research] Source code for our paper "Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis".

Score: 32 / 100 (Emerging)

This project helps AI researchers and developers understand and mitigate hidden cultural biases in text-to-image models such as DALL-E 2 and Stable Diffusion. It takes text prompts, optionally modified with homoglyphs (non-Latin characters that are visually near-identical to Latin ones), and either generates images or analyzes model behavior to expose biases. The output includes the generated, biased images, quantitative bias scores, and a fine-tuned model that is robust against homoglyph attacks.
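A minimal, illustrative sketch (not code from this repository) of the kind of homoglyph manipulation the paper studies, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint; the model ID, prompt, and output file names are placeholders:

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion pipeline (placeholder checkpoint ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt_latin = "A photo of a city skyline"
# Swap one Latin "o" (U+006F) for the visually identical Cyrillic "о" (U+043E).
# The two prompts look the same to a human but tokenize differently, which can
# shift the generated images toward a different cultural context.
prompt_homoglyph = prompt_latin.replace("o", "\u043e", 1)

for name, prompt in [("latin", prompt_latin), ("homoglyph", prompt_homoglyph)]:
    image = pipe(prompt).images[0]
    image.save(f"skyline_{name}.png")

Comparing the two outputs side by side is the simplest way to see whether a single substituted character changes the cultural framing of the generated scene.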

No commits in the last 6 months.

Use this if you are a researcher or developer working with text-to-image AI and want to detect, analyze, or unlearn cultural biases introduced by subtle textual manipulations.

Not ideal if you are looking for a general-purpose text-to-image generation tool or if you are not familiar with AI model training and evaluation.

AI-ethics bias-detection text-to-image-synthesis AI-safety model-robustness
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25

Stars: 12
Forks: 2
Language: Python
License: MIT
Last pushed: Jan 08, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/LukasStruppek/Exploiting-Cultural-Biases-via-Homoglyphs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
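The same request from Python, assuming the endpoint returns JSON (the response schema is not documented on this page):

import requests

# Quality endpoint for this repository (same URL as the curl example above).
url = (
    "https://pt-edge.onrender.com/api/v1/quality/diffusion/"
    "LukasStruppek/Exploiting-Cultural-Biases-via-Homoglyphs"
)
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.json())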