cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
This library helps machine learning engineers and researchers assess how robust their models are against adversarial examples. You supply a trained model and data; it generates perturbed inputs (adversarial attacks) and helps you evaluate defenses. The output includes performance metrics showing how well the model withstands these attacks.
6,425 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you are a machine learning practitioner who needs to understand and improve the robustness of your AI models against subtle, malicious data inputs.
Not ideal if you are looking for a general-purpose machine learning library or if your focus is not on model security against adversarial attacks.
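To make the idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic attacks CleverHans implements. This is a self-contained NumPy illustration of the technique on a toy logistic model, not CleverHans' own API: perturb the input in the direction of the sign of the loss gradient, which reliably increases the loss.

```python
import numpy as np

# Minimal FGSM (Fast Gradient Sign Method) sketch on a logistic model.
# Illustrates the kind of attack CleverHans automates; this is a toy
# standalone example, not the CleverHans API.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, b, x, y, eps):
    # For logistic regression the input gradient has the closed
    # form (p - y) * w; FGSM steps along its sign.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.1
x = rng.normal(size=5)
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.3)
print(float(loss(w, b, x, y)), float(loss(w, b, x_adv, y)))
```

Because the logistic loss is convex in the input, a step of any size along the sign of a nonzero gradient strictly increases it; a real deep network is not convex, but in practice small FGSM perturbations still raise the loss enough to flip predictions.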
Stars
6,425
Forks
1,399
Language
Jupyter Notebook
License
MIT
Category
ml-frameworks
Last pushed
Apr 10, 2024
Commits (30d)
0
Dependencies
11
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cleverhans-lab/cleverhans"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
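The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the path layout from the curl command above; the `X-Api-Key` header name for the keyed 1,000/day tier is an assumption, so check the API docs for the real authentication scheme:

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Builds the endpoint URL using the path layout from the curl example.
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category, owner, repo, api_key=None):
    # Fetch and decode the JSON payload for one repository.
    # The X-Api-Key header name is an assumption, not documented here.
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-Api-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "cleverhans-lab", "cleverhans"))
```

No key is needed for the 100 requests/day tier, so `fetch_quality("ml-frameworks", "cleverhans-lab", "cleverhans")` should work as-is within that limit.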
Related frameworks
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
advboxes/AdvBox
Advbox is a toolbox to generate adversarial examples that fool neural networks in...