Harry24k/adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks]

Score: 50 / 100 (Established)

This tool helps machine learning engineers and researchers assess the robustness of their deep learning models. It takes an existing PyTorch model and input data (like images) and generates 'adversarial examples' — slightly modified inputs designed to trick the model. The output is a set of these adversarial examples, which can then be used to test how well the model resists subtle attacks.
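The idea behind generating adversarial examples can be sketched with the Fast Gradient Sign Method (FGSM), one of the attacks the library implements: nudge the input by a small amount `eps` in the direction that increases the model's loss. The sketch below uses a hand-written logistic-regression model in NumPy so the input gradient can be computed analytically; it illustrates the concept only and is not the torchattacks API (which wraps a full PyTorch model instead).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model:
    x_adv = x + eps * sign(dL/dx), where L is binary cross-entropy."""
    p = sigmoid(w @ x + b)   # model's predicted probability for class 1
    grad_x = (p - y) * w     # analytic gradient of BCE loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values, not from the library)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])     # clean input, true label y = 1
y = 1.0

clean_pred = sigmoid(w @ x + b) > 0.5       # correct on the clean input
x_adv = fgsm(x, y, w, b, eps=1.0)
adv_pred = sigmoid(w @ x_adv + b) > 0.5     # the perturbed input flips the prediction
```

The attack's strength is controlled by `eps`: smaller values keep the perturbation less perceptible but are less likely to change the model's output, which is exactly the trade-off these robustness tests probe.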

2,147 stars. No commits in the last 6 months.

Use this if you are a deep learning engineer or researcher focused on model security and need to rigorously test your PyTorch models against various adversarial attacks to understand their vulnerabilities.

Not ideal if you are looking for a general-purpose machine learning library or if your models are not implemented in PyTorch, as this tool is specifically designed for PyTorch-based adversarial attacks.

Tags: model security · deep learning · robustness · computer vision · AI safety · adversarial machine learning
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 24 / 25


Stars: 2,147
Forks: 369
Language: Python
License: MIT
Last pushed: Jun 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Harry24k/adversarial-attacks-pytorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
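The same endpoint can be queried from Python with the standard library. The URL below is the one shown in the curl command; the `X-Api-Key` header name is an assumption for illustration, so check the API documentation before relying on it.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch and decode the quality JSON for one repository."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # An optional key raises the daily limit; header name is assumed.
        req.add_header("X-Api-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build the URL for this repository (matches the curl example above)
url = quality_url("ml-frameworks", "Harry24k", "adversarial-attacks-pytorch")
```

Calling `fetch_quality(...)` performs the actual request, subject to the 100 requests/day anonymous limit noted above.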