ByungKwanLee/Masking-Adversarial-Damage

[CVPR 2022] Official PyTorch Implementation for "Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network"

Score: 34 / 100 (Emerging)

This project helps machine learning engineers and researchers optimize deep learning models used in image recognition for real-world reliability. It takes a pre-trained neural network and image datasets, then identifies and removes parts of the model that are vulnerable to adversarial attacks, resulting in a smaller, more robust model that maintains its accuracy against malicious inputs. This is useful for anyone deploying computer vision systems where security and performance are critical.

No commits in the last 6 months.

Use this if you need to make your image classification models more resilient to adversarial attacks while also reducing their size for efficiency.

Not ideal if your primary concern is solely model accuracy without regard for adversarial robustness or model compression.

adversarial-robustness model-compression image-recognition deep-learning-security neural-network-pruning
Status: Stale (6 months) · No package published · No dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 32
Forks: 4
Language: Python
License: MIT
Last pushed: Mar 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ByungKwanLee/Masking-Adversarial-Damage"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
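The same endpoint can also be queried from Python. A minimal sketch using only the standard library; the `quality_url` helper and the assumption that the API returns a JSON object are illustrative, not part of the documented API:

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score API URL for a repository."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report (network call; assumes a JSON response)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (makes a live request, subject to the 100 requests/day limit):
# report = fetch_quality("ml-frameworks", "ByungKwanLee", "Masking-Adversarial-Damage")
```

Keeping URL construction separate from the request makes it easy to swap in an API key or a different HTTP client later.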