nebula-beta/awesome-adversarial-deep-learning

A list of awesome resources on adversarial attack and defense methods in deep learning

Quality score: 29 / 100 (Experimental)

This repository curates research papers and tools on adversarial examples in deep learning. It helps security researchers and machine learning engineers understand both how to craft the subtle input manipulations that fool AI models (attacks) and how to prevent them (defenses), with links to papers and code for making AI systems more robust against malicious inputs.

132 stars. No commits in the last 6 months.

Use this if you are a security researcher or machine learning engineer concerned with the robustness and security of deep learning models, especially in computer vision.

Not ideal if you are a beginner looking for an introduction to deep learning or general machine learning best practices, as this focuses on a niche security aspect.

AI-security machine-learning-robustness computer-vision AI-safety deep-learning-auditing
No License · Stale 6m · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 11 / 25

Stars: 132
Forks: 11
Language: n/a
License: none
Last pushed: Jan 07, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nebula-beta/awesome-adversarial-deep-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
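The curl command above can also be wrapped in a small script. This is a minimal Python sketch using only the standard library: the endpoint path comes from the curl example, but the `Authorization: Bearer` header for passing a free key and the JSON response shape are assumptions — check the service's API documentation for the real scheme.

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(collection, repo):
    """Compose the quality-report URL, e.g. collection='ml-frameworks',
    repo='nebula-beta/awesome-adversarial-deep-learning'."""
    return f"{API_BASE}/{collection}/{repo}"

def fetch_quality(collection, repo, api_key=None):
    """Fetch the quality report as parsed JSON.

    Without a key the service allows 100 requests/day; how a free key
    is actually sent is an assumption here (Bearer header), so verify
    against the API docs before relying on it.
    """
    headers = {"Accept": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"  # hypothetical auth scheme
    with urlopen(Request(build_url(collection, repo), headers=headers)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("ml-frameworks", "nebula-beta/awesome-adversarial-deep-learning")` requests the same resource as the curl command shown above.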