nishiwen1214/AT_Papers

Must-read papers on Adversarial training for neural networks!

Score: 19 / 100 (Experimental)

This is a curated collection of must-read academic papers focused on making neural network models more robust against adversarial attacks. It provides a list of research articles, some with code links, for practitioners working to build reliable AI systems. Researchers, machine learning engineers, and data scientists looking to enhance the security and resilience of their models against deliberately crafted input distortions would find this beneficial.

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking for foundational and recent literature on adversarial training to improve model robustness.

Not ideal if you are looking for ready-to-use code implementations or a high-level overview without diving into academic papers.

Tags: AI robustness, neural network security, machine learning research, adversarial defense, model reliability
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 6 / 25


Stars: 12
Forks: 1
Language: not listed
License: none
Last pushed: Oct 16, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/nishiwen1214/AT_Papers"
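The same endpoint can be called from a script. A minimal sketch in Python, assuming only the URL pattern visible in the curl example above (the category segment `nlp` and the response's JSON field names are not documented, so the helper just builds the request URL):

```python
# Build the quality-API URL for a repository, following the pattern
# shown in the curl example: /api/v1/quality/<category>/<owner>/<repo>.
# The category value ("nlp") is taken from the example, not a documented list.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for one repository's quality data."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("nlp", "nishiwen1214", "AT_Papers"))
```

From there, the URL can be fetched with any HTTP client (e.g. `urllib.request.urlopen`) subject to the rate limits noted below.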

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.