bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
This Python library helps machine learning researchers and engineers evaluate the robustness of their models against adversarial attacks. It takes an existing machine learning model and generates 'adversarial examples': slightly altered inputs designed to fool the model. The output shows how easily the model can be tricked, helping developers build more robust AI systems.
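To make the idea of "slightly altered inputs" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic attack of the kind foolbox implements, applied to a toy linear classifier. The weights, input, and epsilon below are made-up illustrative values, and this is not foolbox's actual API:

```python
# Illustrative sketch only: an FGSM-style attack on a toy linear classifier.
# All values are made up for demonstration; this does not use foolbox.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1 if v > 0 else (-1 if v < 0 else 0)

def predict(w, x):
    # Toy binary classifier: class +1 if w.x > 0, else -1.
    return 1 if dot(w, x) > 0 else -1

def fgsm(w, x, label, eps):
    # For the loss -label * (w.x), the gradient w.r.t. x is -label * w,
    # so FGSM steps eps in the sign direction of that gradient.
    return [xi + eps * sign(-label * wi) for wi, xi in zip(w, x)]

w = [0.5, -1.0, 2.0]    # toy model weights
x = [0.1, 0.0, 0.05]    # clean input, correctly classified as +1
x_adv = fgsm(w, x, label=1, eps=0.1)

print(predict(w, x))      # 1  (clean input classified correctly)
print(predict(w, x_adv))  # -1 (a perturbation of at most 0.1 per feature flips it)
```

Each feature moves by at most epsilon, yet the prediction flips; foolbox automates this kind of search over many attack algorithms and epsilon values against real PyTorch, TensorFlow, and JAX models.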
2,946 stars. Used by 1 other package. Available on PyPI.
Use this if you are a machine learning developer or researcher needing to benchmark the robustness of your neural networks against various adversarial attacks.
Not ideal if you are an end-user of an AI application and not directly involved in developing or evaluating the machine learning models themselves.
Stars
2,946
Forks
437
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Dec 03, 2025
Commits (30d)
0
Dependencies
7
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/bethgelab/foolbox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
advboxes/AdvBox
Advbox is a toolbox to generate adversarial examples that fool neural networks in...