BorealisAI/advertorch

A Toolbox for Adversarial Robustness Research

Quality score: 58/100 (Established)

This tool helps machine learning researchers evaluate and improve the robustness of their deep learning models against adversarial attacks. It takes an existing PyTorch model and data, then generates 'adversarial examples' (slightly modified inputs designed to fool the model) or applies defenses to make the model more robust. It is aimed at researchers working to make AI systems more reliable and trustworthy in the face of malicious inputs.
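
As a concrete illustration of that workflow, below is a minimal sketch of generating adversarial examples with advertorch's LinfPGDAttack, following the interface shown in the project's README. The tiny classifier and random inputs are placeholders standing in for a real trained model and dataset.

# Sketch: generating adversarial examples with advertorch's LinfPGDAttack.
# The placeholder model and random "images" below are illustrative only;
# the attack parameters mirror the values used in advertorch's own examples.
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder classifier standing in for a real trained PyTorch model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Placeholder batch of MNIST-sized inputs in [0, 1] with integer labels.
cln_data = torch.rand(8, 1, 28, 28)
true_label = torch.randint(0, 10, (8,))

# Configure an L-infinity PGD attack against the model.
adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"), eps=0.3,
    nb_iter=40, eps_iter=0.01, rand_init=True, clip_min=0.0, clip_max=1.0,
    targeted=False)

# Perturb the clean inputs; each result stays within eps of its original.
adv_untargeted = adversary.perturb(cln_data, true_label)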

1,367 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning researcher working with PyTorch and need to test your model's vulnerability to adversarial attacks or develop defenses against them.

Not ideal if you are a practitioner looking for a general-purpose machine learning library or if your models are not built with PyTorch.

Tags: AI Security, Deep Learning, Robustness, Adversarial Machine Learning, Model Evaluation, Machine Learning Research
Activity: Stale (6 months)
Maintenance: 0/25
Adoption: 11/25
Maturity: 25/25
Community: 22/25


Stars: 1,367
Forks: 201
Language: Jupyter Notebook
License: LGPL-3.0
Last pushed: Sep 14, 2023
Commits (30d): 0
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BorealisAI/advertorch"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
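
The same endpoint can be called programmatically. The sketch below simply performs the unauthenticated GET shown in the curl command above; it assumes the response is JSON, which is not stated explicitly here.

# Sketch: the same request as the curl command, made from Python.
# Uses only the unauthenticated 100 requests/day tier; assumes a JSON response.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BorealisAI/advertorch"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())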