lionelmessi6410/ntga

Code for "Neural Tangent Generalization Attacks" (ICML 2021)

Score: 33 / 100 (Emerging)

This project helps machine learning researchers and security analysts generate and study 'poisoned' datasets that sabotage the generalization ability of deep learning models. It takes standard image datasets such as MNIST or CIFAR-10 and outputs modified versions in which subtle, imperceptible perturbations cause trained neural networks to perform poorly on new, unseen data, even while appearing to learn the training data perfectly. The primary user is someone researching adversarial attacks or model robustness.
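
To make that failure mode concrete, the sketch below shows one way to measure such an attack: train a simple classifier on a poisoned training set, then compare its training accuracy with its accuracy on clean test data. The file names and the victim model are illustrative assumptions, not this repository's actual interface; see the repo's README for its real scripts and data formats.

# Hedged sketch: measuring a generalization attack's effect.
# File names, shapes, and the victim model are assumptions for illustration,
# not this repository's actual interface.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical inputs: a poisoned MNIST-style training set and a clean test set.
x_train = np.load("x_train_poisoned.npy").reshape(-1, 28 * 28) / 255.0
y_train = np.load("y_train.npy")  # assumed to hold integer class labels
x_test = np.load("x_test_clean.npy").reshape(-1, 28 * 28) / 255.0
y_test = np.load("y_test_clean.npy")

# A small fully connected network stands in for the victim model.
victim = MLPClassifier(hidden_layer_sizes=(256,), max_iter=50)
victim.fit(x_train, y_train)

# A successful generalization attack yields high training accuracy
# but sharply degraded accuracy on clean, unseen data.
print("train accuracy:", accuracy_score(y_train, victim.predict(x_train)))
print("test accuracy: ", accuracy_score(y_test, victim.predict(x_test)))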

No commits in the last 6 months.

Use this if you need to create training datasets that cause deep neural networks to fail at generalizing to new data, despite appearing to learn the training examples well.

Not ideal if you are looking to improve the generalization ability or robustness of your machine learning models, as this tool is designed to do the opposite.

adversarial-machine-learning deep-learning-security data-poisoning model-robustness machine-learning-research
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 10 / 25

Stars: 41
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Jul 29, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lionelmessi6410/ntga"

Open to everyone: 100 requests/day with no API key, or 1,000/day with a free key.
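
If you would rather call the endpoint from code, a minimal Python equivalent of the curl command above is sketched below. The response is assumed to be JSON; the exact fields are not documented here, so inspect the payload before relying on any of them.

# Minimal sketch of querying the quality API from Python.
# Assumes a JSON response; field names are an assumption, so print and inspect.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lionelmessi6410/ntga"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. hitting the daily limit)
print(resp.json())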