Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
This library helps machine learning developers and security researchers build and test AI systems that are resilient to malicious attacks. It provides tools to evaluate a model's robustness against threats such as evasion, poisoning, model extraction, and inference attacks, and to implement corresponding defenses.
5,886 stars. Used by 1 other package. Available on PyPI.
Use this if you are a machine learning developer or security professional concerned about the security and robustness of your AI models against adversarial attacks.
Not ideal if you are looking for general machine learning development tools unrelated to security or adversarial robustness.
Stars: 5,886
Forks: 1,296
Language: Python
License: MIT
Category:
Last pushed: Dec 12, 2025
Commits (30d): 0
Dependencies: 6
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trusted-AI/adversarial-robustness-toolbox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
advboxes/AdvBox
Advbox is a toolbox to generate adversarial examples that fool neural networks in...