nishiwen1214/AT_Papers
Must-read papers on Adversarial training for neural networks!
This is a curated collection of must-read academic papers on making neural networks more robust to adversarial attacks. It lists research articles, some with code links, for practitioners building reliable AI systems. Researchers, machine learning engineers, and data scientists looking to harden their models against deliberately crafted input perturbations will find it useful.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer looking for foundational and recent literature on adversarial training to improve model robustness.
Not ideal if you are looking for ready-to-use code implementations or a high-level overview without diving into academic papers.
Stars: 12
Forks: 1
Language: —
License: —
Category: nlp
Last pushed: Oct 16, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/nishiwen1214/AT_Papers"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
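The same endpoint can be called from Python. Below is a minimal sketch using only the standard library; note that the response schema and the API-key header name are assumptions, not documented here — check the API's own docs before relying on them.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository, e.g. category='nlp'."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch the quality record as a dict (schema assumed to be JSON).

    Passing api_key is assumed to raise the daily limit; the header
    name below is hypothetical.
    """
    req = Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)


url = quality_url("nlp", "nishiwen1214", "AT_Papers")
# url == "https://pt-edge.onrender.com/api/v1/quality/nlp/nishiwen1214/AT_Papers"
```

The URL-building helper mirrors the curl example above; `fetch_quality` only adds the (hypothetical) key header when one is supplied, so anonymous use stays within the free tier.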
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/SememePSO-Attack
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial...