Harry24k/adversarial-attacks-pytorch
PyTorch implementation of adversarial attacks [torchattacks]
This tool helps machine learning engineers and researchers assess the robustness of their deep learning models. It takes an existing PyTorch model and input data (like images) and generates 'adversarial examples' — slightly modified inputs designed to trick the model. The output is a set of these adversarial examples, which can then be used to test how well the model resists subtle attacks.
2,147 stars. No commits in the last 6 months.
Use this if you are a deep learning engineer or researcher focused on model security and need to rigorously test your PyTorch models against various adversarial attacks to understand their vulnerabilities.
Not ideal if you are looking for a general-purpose machine learning library or if your models are not implemented in PyTorch, as this tool is specifically designed for PyTorch-based adversarial attacks.
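The core mechanism behind such attacks is adding a small, targeted perturbation to the input. As a toy illustration (plain NumPy on a linear classifier, not the torchattacks API; every name below is illustrative), the FGSM-style idea is to nudge the input one step in the sign of the loss gradient:

```python
import numpy as np

# Toy linear "model": logits = x @ W, softmax cross-entropy loss.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))   # 4 input features, 2 classes
x = rng.normal(size=4)        # clean input
y = 0                         # true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_grad(x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x.
    p = softmax(x @ W)
    p[y] -= 1.0               # dL/dlogits for softmax cross-entropy
    return W @ p              # chain rule back to the input

eps = 0.25                    # perturbation budget (L-infinity)
x_adv = x + eps * np.sign(input_grad(x, y))  # one signed-gradient step
```

The perturbation is bounded by `eps` in every coordinate, which is why such examples can look unchanged to a human while moving the loss sharply. torchattacks wraps this pattern (and many stronger iterative variants) behind a uniform attack interface for real PyTorch models.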
Stars: 2,147
Forks: 369
Language: Python
License: MIT
Category:
Last pushed: Jun 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Harry24k/adversarial-attacks-pytorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research