foolbox and AdvBox
These are competitors: both generate adversarial examples across multiple deep learning frameworks. Foolbox has broader adoption and more active maintenance, while AdvBox adds robustness-benchmarking features, so users typically choose one as their primary adversarial-attack toolkit rather than using both together.
About foolbox
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
This is a Python library that helps machine learning researchers and engineers evaluate the resilience of their AI models against adversarial attacks. It takes an existing machine learning model and generates 'adversarial examples'—slightly altered inputs designed to fool the model. The output shows how easily the model can be tricked, helping developers build more robust AI systems.
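The core idea behind these attacks can be shown without the library itself. Below is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), one of the classic attacks Foolbox implements; the linear classifier, weights, and inputs are made up for illustration and this is not Foolbox's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: nudge x by eps in the direction that increases the loss,
    i.e. x_adv = x + eps * sign(dL/dx) for the logistic loss."""
    grad = (sigmoid(w @ x + b) - y) * w   # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)

# Toy linear classifier (hypothetical weights, for illustration only)
w = np.array([1.0, 1.0])
b = 0.0
x = np.array([0.2, 0.2])   # clean input, true label 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.25)
pred_clean = int(sigmoid(w @ x + b) > 0.5)      # correctly classified as 1
pred_adv = int(sigmoid(w @ x_adv + b) > 0.5)    # flipped to 0 by a 0.25-sized nudge
```

The perturbation is bounded by `eps` per coordinate, so the adversarial input stays visually close to the original while crossing the decision boundary; Foolbox wraps this same pattern behind framework-agnostic model and attack objects.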
About AdvBox
advboxes/AdvBox
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow. It can also benchmark the robustness of machine learning models and provides a command-line tool for generating adversarial examples with zero coding.
This tool helps AI engineers and security researchers evaluate the robustness of AI models. It generates 'adversarial examples'—slightly altered inputs that fool neural networks—and can also detect these deceptive inputs. You provide your AI model and data, and it outputs adversarial examples or insights into your model's vulnerabilities.
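Such a robustness benchmark typically reports accuracy as the attack budget grows. Here is a hedged NumPy sketch of that idea, using a batched FGSM against a hypothetical fixed linear model; the model, data, and epsilon values are invented for illustration and do not reflect AdvBox's actual interface.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, w, b):
    return (sigmoid(X @ w + b) > 0.5).astype(int)

def fgsm_batch(X, y, w, b, eps):
    # Gradient of the logistic loss w.r.t. each input row
    grad = (sigmoid(X @ w + b) - y)[:, None] * w
    return X + eps * np.sign(grad)

# Hypothetical linear model plus a small synthetic dataset
rng = np.random.default_rng(0)
w, b = np.array([1.0, -1.0]), 0.0
X = rng.normal(size=(200, 2))
y = predict(X, w, b)  # label with the model itself, so clean accuracy is 100%

# Robustness curve: accuracy under FGSM at increasing perturbation budgets
accs = []
for eps in [0.0, 0.1, 0.3, 0.5]:
    X_adv = fgsm_batch(X, y.astype(float), w, b, eps)
    accs.append(float((predict(X_adv, w, b) == y).mean()))
```

Accuracy starts at 100% and falls as `eps` grows, because larger perturbations push more inputs across the decision boundary; the resulting curve is the kind of vulnerability insight a robustness benchmark summarizes.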