foolbox and AdvBox

These tools are competitors: both generate adversarial examples across multiple deep learning frameworks. Foolbox has broader adoption and more recent maintenance, while AdvBox adds robustness-benchmarking capabilities. Users would typically pick one as their primary adversarial-attack toolkit rather than use both together.

                     foolbox             AdvBox
Score                64 (Established)    50 (Established)
Maintenance          6/25                0/25
Adoption             11/25               10/25
Maturity             25/25               16/25
Community            22/25               24/25
Stars                2,946               1,412
Forks                437                 268
Downloads
Commits (30d)        0                   0
Language             Python              Jupyter Notebook
License              MIT                 Apache-2.0
Risk flags           None                Stale 6m, No Package, No Dependents

About foolbox

bethgelab/foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

This is a Python library that helps machine learning researchers and engineers evaluate the resilience of their AI models against adversarial attacks. It takes an existing machine learning model and generates 'adversarial examples'—slightly altered inputs designed to fool the model. The output shows how easily the model can be tricked, helping developers build more robust AI systems.

AI-robustness machine-learning-security model-evaluation deep-learning-research adversarial-testing
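To make "slightly altered inputs designed to fool the model" concrete, here is a minimal, self-contained NumPy sketch of the Fast Gradient Sign Method (FGSM), one of the classic attacks Foolbox implements, applied to a toy linear classifier. The model, weights, and inputs are invented for illustration; this is not Foolbox's API, just the underlying idea.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: step eps in the direction of the
    sign of the loss gradient, then clip back to the valid input range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear classifier: predicts class 1 when w . x + b > 0.
w = np.array([2.0, -1.0])
b = -0.4
x = np.array([0.3, 0.1])       # clean input, true label 1

logit = w @ x + b              # 0.1 > 0, so the clean input is classified correctly

# For a linear model, increasing the loss for a class-1 input means
# moving against w, so the loss gradient w.r.t. x is proportional to -w.
grad = -w
x_adv = fgsm(x, grad, eps=0.2)
adv_logit = w @ x_adv + b      # negative: the perturbed input is misclassified
```

The perturbation is bounded by `eps` per pixel, so `x_adv` stays close to `x` while still flipping the prediction; this is exactly the kind of output Foolbox produces for real PyTorch, TensorFlow, or JAX models.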

About AdvBox

advboxes/AdvBox

AdvBox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow. It can also benchmark the robustness of machine learning models, and it provides a command-line tool that generates adversarial examples with zero coding.

This tool helps AI engineers and security researchers evaluate the robustness of AI models. It generates 'adversarial examples'—slightly altered inputs that fool neural networks—and can also detect these deceptive inputs. You provide your AI model and data, and it outputs adversarial examples or insights into your model's vulnerabilities.

AI Security, Model Robustness, Adversarial AI, Deepfake Detection, Computer Vision Security
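The robustness benchmarking AdvBox offers boils down to measuring accuracy under increasingly strong perturbations. The sketch below, again using an invented toy linear model in plain NumPy rather than AdvBox's actual API, computes a small robustness curve: accuracy at several perturbation budgets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model and a small labeled batch (labels in {0, 1}),
# constructed so the model is 100% accurate on clean data.
w = np.array([1.5, -2.0])
X = rng.uniform(0, 1, size=(200, 2))
y = (X @ w > 0).astype(int)

def predict(X):
    return (X @ w > 0).astype(int)

def robust_accuracy(X, y, eps):
    """Accuracy under a worst-case FGSM-style perturbation of size eps.
    For a linear model, the worst perturbation for each point moves it
    against its own class: -sign(w) for class 1, +sign(w) for class 0."""
    direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
    X_adv = np.clip(X + eps * direction, 0.0, 1.0)
    return float((predict(X_adv) == y).mean())

clean_acc = float((predict(X) == y).mean())
curve = {eps: robust_accuracy(X, y, eps) for eps in (0.0, 0.05, 0.1, 0.2)}
```

Plotting `curve` for a real model gives the accuracy-vs-perturbation tradeoff that robustness benchmarks report: accuracy can only degrade as the attacker's budget `eps` grows.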

Scores updated daily from GitHub, PyPI, and npm data.