Trusted-AI/adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Score: 67 / 100 (Established)

This library helps machine learning developers and security researchers build and test AI systems that are resilient to malicious attacks. It provides tools to evaluate how robust a machine learning model is against threats like data poisoning or evasion, and to implement defenses. If you're building or securing AI applications, you can use it to test your models against various adversarial techniques and enhance their security.
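To make the idea of an evasion attack concrete, here is a minimal NumPy-only sketch of the Fast Gradient Sign Method (FGSM), one of the classic evasion techniques this library implements, applied to a toy logistic classifier. It does not use the ART API itself; all names here (`fgsm_perturb`, the toy weights) are illustrative.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM step: nudge every input feature by eps in the
    direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy logistic classifier. For logistic loss with label y,
# the gradient of the loss w.r.t. the input x is (p - y) * w.
w = np.array([1.0, -2.0, 0.5])   # fixed model weights (illustrative)
x = np.array([0.2, 0.4, -0.1])   # clean input
y = 1.0                          # true label

p = 1.0 / (1.0 + np.exp(-(w @ x)))  # model's predicted probability
grad = (p - y) * w                   # loss gradient w.r.t. x
x_adv = fgsm_perturb(x, grad, eps=0.1)
# x_adv now yields a lower probability for the true class than x did.
```

The perturbation is tiny per feature (bounded by `eps`), yet it is chosen to maximally hurt the model, which is what makes evasion attacks effective and worth testing against.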

5,886 stars. Used by 1 other package. Available on PyPI.

Use this if you are a machine learning developer or security professional concerned about the security and robustness of your AI models against adversarial attacks.

Not ideal if you are looking for general machine learning development tools unrelated to security or adversarial robustness.

Tags: AI security, machine learning robustness, adversarial AI, model defense, AI red teaming
Maintenance: 6 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 25 / 25

How are scores calculated?

Stars: 5,886
Forks: 1,296
Language: Python
License: MIT
Last pushed: Dec 12, 2025
Commits (30d): 0
Dependencies: 6
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trusted-AI/adversarial-robustness-toolbox"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
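The same endpoint can be called from Python. This sketch only constructs the URL shown in the curl example; the helper name `quality_url` is illustrative, and actually fetching the JSON (e.g. with `urllib.request.urlopen(url)`) is left to the caller and subject to the rate limits above.

```python
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair.
    quote() guards against special characters in the names."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("Trusted-AI", "adversarial-robustness-toolbox")
```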