bethgelab/foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

64 / 100 (Established)

This is a Python library that helps machine learning researchers and engineers evaluate the resilience of their AI models against adversarial attacks. It takes an existing machine learning model and generates 'adversarial examples'—slightly altered inputs designed to fool the model. The output shows how easily the model can be tricked, helping developers build more robust AI systems.

2,946 stars. Used by 1 other package. Available on PyPI.

Use this if you are a machine learning developer or researcher needing to benchmark the robustness of your neural networks against various adversarial attacks.

Not ideal if you are an end-user of an AI application and not directly involved in developing or evaluating the machine learning models themselves.

AI-robustness machine-learning-security model-evaluation deep-learning-research adversarial-testing
Maintenance 6 / 25
Adoption 11 / 25
Maturity 25 / 25
Community 22 / 25


Stars

2,946

Forks

437

Language

Python

License

MIT

Last pushed

Dec 03, 2025

Commits (30d)

0

Dependencies

7

Reverse dependents

1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/bethgelab/foolbox"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.