tao-bai/attack-and-defense-methods
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
This list helps AI researchers and security engineers make machine learning models more secure. It collects academic papers on methods for crafting "adversarial examples", inputs designed to mislead models, and on techniques for defending against such attacks. Anyone building robust and trustworthy AI systems would find this resource valuable.
212 stars. No commits in the last 6 months.
Use this if you need to research the latest methods for both attacking and defending machine learning models, especially in areas like computer vision.
Not ideal if you are looking for introductory material on machine learning or practical code implementations for security.
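To make the subject concrete: many of the attack papers indexed in lists like this build on the Fast Gradient Sign Method (FGSM). The sketch below is purely illustrative and is not code from the repo; the linear classifier, weights, input, and logistic loss are all hypothetical stand-ins for a real model.

```python
import math

def sign(v):
    """Return -1, 0, or 1 depending on the sign of v."""
    return (v > 0) - (v < 0)

def fgsm_perturb(x, grad, epsilon=0.1):
    """One-step FGSM: nudge each feature by epsilon in the
    direction of the loss gradient's sign."""
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical toy linear classifier: score = w . x, label y in {-1, +1}.
w = [1.0, -2.0, 0.5]     # hypothetical model weights
x = [0.2, 0.4, -0.1]     # clean input
y = 1.0                  # true label

score = sum(wi * xi for wi, xi in zip(w, x))
# Gradient of the logistic loss log(1 + exp(-y * score)) w.r.t. x:
grad = [-y * wi / (1.0 + math.exp(y * score)) for wi in w]

x_adv = fgsm_perturb(x, grad, epsilon=0.1)
print(x_adv)  # perturbed input that increases the model's loss
```

Iterative variants such as PGD apply this step repeatedly with a projection back into an epsilon-ball; both families of attacks, and defenses against them, are covered by papers in the list.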
Stars: 212
Forks: 27
Language: TeX
License: MIT
Category: ml-frameworks
Last pushed: May 27, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tao-bai/attack-and-defense-methods"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research