fiveai/GFCS

Code for the ICLR 2022 paper "Attacking deep networks with surrogate-based adversarial black-box methods is easy"

Overall score: 34 / 100 (Emerging)

This project helps machine learning security researchers and adversarial AI specialists evaluate the robustness of deep neural networks. It takes an image dataset and a trained black-box model as input, then generates adversarial examples that can fool the model. The output helps users understand how easily their models might be attacked without direct access to the model's internal workings.

Use this if you need to test the vulnerability of deep learning models to black-box adversarial attacks, particularly for image classification tasks.

Not ideal if you are looking for a general-purpose adversarial training library or if you need to evaluate white-box attack methods.
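The core idea of the paper's GFCS method is to try a surrogate model's gradient direction first and fall back to a cheaper search direction only when that fails. The sketch below illustrates that loop on toy linear models in NumPy; it is not the repository's implementation, and every name in it (`black_box_margin`, `surrogate_direction`, `gfcs_like_attack`, the weight matrices) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: the "black box" is query-only (logits, no gradients);
# the surrogate is a similar but imperfect linear model we can differentiate.
W_black = rng.normal(size=(10, 32))                   # hidden victim weights
W_surr = W_black + 0.3 * rng.normal(size=(10, 32))    # imperfect surrogate

def black_box_margin(x, label):
    """Query-only access: margin of the true class over the best other class."""
    logits = W_black @ x
    return logits[label] - np.max(np.delete(logits, label))

def surrogate_direction(x, label):
    """Direction that decreases the surrogate's margin (closed form for a linear model)."""
    logits = W_surr @ x
    other = int(np.argmax(np.delete(logits, label)))
    if other >= label:
        other += 1  # map index back after np.delete removed `label`
    return W_surr[other] - W_surr[label]

def gfcs_like_attack(x, label, steps=200, eps=0.1):
    """Surrogate-gradient-first, random-coordinate-second search (illustrative only)."""
    x = x.copy()
    for _ in range(steps):
        if black_box_margin(x, label) < 0:
            return x, True  # misclassified: attack succeeded
        d = surrogate_direction(x, label)
        cand = x + eps * d / (np.linalg.norm(d) + 1e-12)
        if black_box_margin(cand, label) < black_box_margin(x, label):
            x = cand  # the surrogate direction helped; keep it
        else:
            coord = rng.integers(x.size)  # fall back to a random coordinate
            for sign in (+1.0, -1.0):
                cand = x.copy()
                cand[coord] += sign * eps
                if black_box_margin(cand, label) < black_box_margin(x, label):
                    x = cand
                    break
    return x, black_box_margin(x, label) < 0

x0 = rng.normal(size=32)
label = int(np.argmax(W_black @ x0))  # start from a correctly classified point
x_adv, success = gfcs_like_attack(x0, label)
```

Because candidate steps are only accepted when a black-box query confirms the margin dropped, the loop needs no victim gradients, which is what makes the surrogate direction so query-efficient when it transfers.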

Tags: adversarial-machine-learning deep-learning-security model-robustness black-box-attacks AI-safety

No package · No dependents

Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Python
License: —
Last pushed: Oct 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fiveai/GFCS"

Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000/day.
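The same endpoint can be queried from Python with the standard library. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, so no specific fields are assumed); `fetch_quality_report` is a name invented for this example.

```python
import json
import urllib.request
from urllib.error import URLError

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fiveai/GFCS"

def fetch_quality_report(url=URL, timeout=10):
    """Fetch the JSON quality report; return None if the service is unreachable
    or the response is not valid JSON."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (URLError, ValueError):
        return None

report = fetch_quality_report()
if report is not None:
    print(json.dumps(report, indent=2))  # field names depend on the API's schema
```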