google-research/active-adversarial-tests

Official implementation of the paper "Increasing Confidence in Adversarial Robustness Evaluations"

36 / 100 — Emerging

This tool helps machine learning engineers and researchers assess how reliably an AI model withstands deliberate attacks. Given an existing model and a description of its defenses, it runs active tests that produce a more trustworthy evaluation of adversarial robustness, helping you understand the model's true resilience.

Use this if you need to rigorously test the adversarial robustness of your machine learning models and want to ensure the evaluation results are highly dependable.

Not ideal if you are looking for a general-purpose tool to develop or train AI models, as this is specifically for advanced robustness evaluation.

AI model security · Adversarial machine learning · Model robustness · Machine learning evaluation · AI safety
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 12 / 25


Stars: 20
Forks: 3
Language: Python
License: none
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google-research/active-adversarial-tests"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
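For scripted access, the endpoint shown in the curl command above can be wrapped in a small helper. A minimal sketch — the `quality_url` function and its `category` parameter are illustrative names; only the base URL and the `ml-frameworks/owner/repo` path pattern come from the example above:

```python
# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, category: str = "ml-frameworks") -> str:
    """Build the quality-API URL for an owner/repo pair.

    The "ml-frameworks" category segment matches the example URL; whether
    other category values exist is an assumption of this sketch.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the URL from the curl example:
print(quality_url("google-research", "active-adversarial-tests"))
```

The returned URL can then be fetched with any HTTP client (e.g. curl, as above) within the stated rate limits.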