cleverhans-lab/cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both

Score: 61 / 100 · Established

This tool helps machine learning engineers and researchers assess how robust their models are against adversarial examples: inputs perturbed slightly but deliberately to cause misclassification. You supply a trained model and data; the library generates adversarial inputs (attacks), lets you apply defenses, and reports metrics showing how well the model withstands the attacks.
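The core idea behind the attacks this library implements can be shown with the Fast Gradient Sign Method (FGSM): perturb each input coordinate by a small step `eps` in the direction that increases the loss. The sketch below is illustrative only and does not use the cleverhans API itself; the toy logistic-regression model and its weights are assumptions for demonstration.

```python
import numpy as np

# Toy logistic-regression "model" (assumption: fixed random weights),
# standing in for the trained TF/PyTorch models cleverhans wraps.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0

def loss_grad_wrt_x(x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid output
    return (p - y) * w                       # d(BCE)/dx for logistic regression

def fgsm(x, y, eps=0.1):
    """FGSM: step each coordinate by eps in the sign of the loss gradient."""
    return x + eps * np.sign(loss_grad_wrt_x(x, y))

x = rng.normal(size=4)
x_adv = fgsm(x, y=1.0, eps=0.1)
# Each coordinate moves by at most eps, so the perturbation stays subtle.
assert np.all(np.abs(x_adv - x) <= 0.1 + 1e-12)
```

Because the loss is convex in `x` here, the perturbed input is guaranteed to have loss at least as high as the original; for deep networks the same first-order step usually, but not provably, increases the loss.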

6,425 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning practitioner who needs to understand and improve the robustness of your AI models against subtle, malicious data inputs.

Not ideal if you are looking for a general-purpose machine learning library or if your focus is not on model security against adversarial attacks.

Tags: AI security, machine learning robustness, model testing, adversarial machine learning, deep learning, defense
Stale: 6 months
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 25 / 25


Stars: 6,425
Forks: 1,399
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 10, 2024
Commits (30d): 0
Dependencies: 11
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cleverhans-lab/cleverhans"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.