JJ-Vice/BAGM

All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models.

32
/ 100
Emerging

This project helps evaluate the security vulnerabilities of text-to-image generative AI models. It introduces 'backdoor attacks' that can subtly manipulate what images these AI models produce when given specific text prompts. AI security researchers or digital marketing professionals concerned about AI manipulation would use this to test and understand potential biases or unwanted outputs.

No commits in the last 6 months.

Use this if you need to understand how text-to-image generative AI models can be manipulated to produce biased or targeted outputs.

Not ideal if you are looking to create new generative AI models or enhance their image generation capabilities without a focus on security evaluation.

AI security digital marketing generative AI model vulnerability bias detection
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25

How are scores calculated?

Stars

13

Forks

2

Language

Jupyter Notebook

License

MIT

Last pushed

Sep 16, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/JJ-Vice/BAGM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.