google-research/active-adversarial-tests
Official implementation of the paper "Increasing Confidence in Adversarial Robustness Evaluations"
This tool helps machine learning engineers and researchers assess how trustworthy an adversarial robustness evaluation really is. Given an existing model and a description of its defense, it runs active tests designed to reveal whether the evaluation's attack is actually effective, producing a more dependable estimate of how well the model resists adversarial attacks.
Use this if you need to rigorously test the adversarial robustness of your machine learning models and want to ensure the evaluation results are highly dependable.
Not ideal if you need a general-purpose tool for developing or training AI models; this project is specifically for advanced robustness evaluation.
Stars: 20
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google-research/active-adversarial-tests"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
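For scripted access, the same endpoint can be called from Python. This is a minimal sketch: the URL structure is taken from the curl command above, but the response field names (`stars`, `forks`, `language`) are assumptions, since the API's schema is not documented here.

```python
import json
from urllib.parse import quote

# Base URL from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    # Each component is a path segment; quote() guards unusual characters.
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "google-research", "active-adversarial-tests")
print(url)

# Hypothetical response payload -- real field names may differ.
# In practice you would fetch `url` (e.g. with urllib.request) and parse the body.
sample = json.loads('{"stars": 20, "forks": 3, "language": "Python"}')
print(sample["stars"], sample["language"])
```

Fetching `url` with `urllib.request.urlopen` (or `requests.get`) and passing the body to `json.loads` would replace the sample payload in a real script.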
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research