lionelmessi6410/ntga
Code for "Neural Tangent Generalization Attacks" (ICML 2021)
This project lets machine learning researchers and security analysts generate and study 'poisoned' datasets that sabotage the generalization ability of deep learning models. It takes standard image datasets such as MNIST or CIFAR-10 and outputs modified versions in which subtle, imperceptible changes cause trained neural networks to perform poorly on new, unseen data, even while they appear to learn the training data perfectly. The primary user is someone researching adversarial attacks or model robustness.
No commits in the last 6 months.
Use this if you need to create training datasets that cause deep neural networks to fail at generalizing to new data, despite appearing to learn the training examples well.
Not ideal if you are looking to improve the generalization ability or robustness of your machine learning models, as this tool is designed to do the opposite.
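To make the "perfect on training data, broken on unseen data" effect concrete, here is a toy sketch. It does not use NTGA's neural-tangent-kernel method; it substitutes a much cruder stand-in (flipping training labels, which is not imperceptible) and a memorizing 1-nearest-neighbour learner, purely to illustrate how a poisoned training set can yield flawless training accuracy while destroying generalization. All data and names below are hypothetical.

```python
# Toy illustration (NOT the NTGA algorithm): label-flip poisoning makes a
# memorizing learner look perfect on its own training set while it fails
# on held-out data. Synthetic 1-D data, two well-separated classes.
import random

random.seed(0)

def nn_predict(train, x):
    # 1-nearest-neighbour: memorizes the training set exactly
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, data):
    return sum(nn_predict(train, x) == y for x, y in data) / len(data)

# Class 0 clustered near 0.0, class 1 clustered near 10.0
clean = [(random.gauss(0, 1), 0) for _ in range(50)] + \
        [(random.gauss(10, 1), 1) for _ in range(50)]
test  = [(random.gauss(0, 1), 0) for _ in range(50)] + \
        [(random.gauss(10, 1), 1) for _ in range(50)]

# "Poison" the training set by flipping every label
poisoned = [(x, 1 - y) for x, y in clean]

train_acc = accuracy(poisoned, poisoned)  # judged on its own poisoned labels
test_acc = accuracy(poisoned, test)       # judged on true held-out labels

print(train_acc)  # 1.0 -- the training set is "learned" perfectly
print(test_acc)   # near 0.0 -- generalization is destroyed
```

NTGA achieves a similar train/test gap far more subtly, by optimizing imperceptible input perturbations against the neural tangent kernel rather than tampering with labels.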
Stars: 41
Forks: 4
Language: Python
License: Apache-2.0
Category:
Last pushed: Jul 29, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lionelmessi6410/ntga"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research