nebula-beta/awesome-adversarial-deep-learning
A list of awesome resources for adversarial attack and defense methods in deep learning
This repository curates research papers and tools on adversarial examples in deep learning. It helps security researchers and machine learning engineers understand how subtle input manipulations can fool AI models (attacks) and how to guard against them (defenses). Entries link to papers and code for making AI systems more robust to malicious inputs.
132 stars. No commits in the last 6 months.
Use this if you are a security researcher or machine learning engineer concerned with the robustness and security of deep learning models, especially in computer vision.
Not ideal if you are a beginner looking for an introduction to deep learning or general machine-learning best practices; this list focuses on a narrow security topic.
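To make "adversarial example" concrete: the fast gradient sign method (FGSM), one of the simplest attacks covered by lists like this, perturbs an input in the direction of the sign of the loss gradient. Below is a minimal pure-Python sketch against a logistic-regression model; the model, weights, and gradient formula are illustrative stand-ins, not taken from any repository in the list.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x loss).

    For logistic regression with binary cross-entropy loss,
    d(loss)/d(x_i) = (p - y) * w_i, where p is the predicted probability.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Illustrative model and input (hypothetical values)
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.0], 1

p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)  # ~0.88, correct
x_adv = fgsm(x, y, w, b, eps=1.5)                             # [-0.5, 1.5]
p_after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)  # ~0.08, flipped
```

The same one-line update rule underlies many of the stronger iterative attacks (e.g. PGD) implemented by the toolboxes listed under alternatives; defenses in the list typically try to make the loss surface insensitive to such perturbations.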
Stars
132
Forks
11
Language
—
License
—
Category
ml-frameworks
Last pushed
Jan 07, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nebula-beta/awesome-adversarial-deep-learning"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research