VinAIResearch/Warping-based_Backdoor_Attack-release

WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)

Quality score: 43 / 100 (Emerging)

This project implements a 'backdoor' attack on image classification systems. It applies subtle, nearly imperceptible warping distortions to a fraction of a training dataset (e.g. traffic signs or faces) and trains an image classifier on the poisoned data. The resulting classifier appears to perform normally on clean inputs but misclassifies warped ('backdoored') inputs into a target class chosen by the attacker. It is aimed at researchers in AI safety and security who want to understand and test vulnerabilities in computer vision models.
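The core idea of a warping-based trigger can be sketched in a few lines: sample a small random flow grid, smoothly upsample it to image resolution, and resample the image along the displaced coordinates. This is a minimal NumPy sketch of that idea, not the repository's actual implementation (which uses PyTorch and `grid_sample`); the function names, the grid size `k`, and the strength `s` are illustrative assumptions.

```python
import numpy as np

def upsample_bilinear(grid, size):
    # Bilinearly upsample a (k, k, 2) control grid to (size, size, 2),
    # yielding a smooth per-pixel flow field.
    k = grid.shape[0]
    coords = np.linspace(0, k - 1, size)
    i0 = np.floor(coords).astype(int)
    i1 = np.minimum(i0 + 1, k - 1)
    t = (coords - i0)[:, None, None]
    rows = grid[i0] * (1 - t) + grid[i1] * t       # interpolate rows
    t2 = (coords - i0)[None, :, None]
    return rows[:, i0] * (1 - t2) + rows[:, i1] * t2  # then columns

def warp_trigger(img, k=4, s=0.5, seed=0):
    # Apply a subtle, fixed random warp to a square image (the "trigger").
    h = img.shape[0]
    rng = np.random.default_rng(seed)
    flow = rng.uniform(-1, 1, size=(k, k, 2))
    flow /= np.mean(np.abs(flow))                  # normalize flow magnitude
    flow = upsample_bilinear(flow, h) * s          # smooth field, strength s
    yy, xx = np.meshgrid(np.arange(h), np.arange(h), indexing="ij")
    ys = np.clip(yy + flow[..., 0], 0, h - 1)
    xs = np.clip(xx + flow[..., 1], 0, h - 1)
    # nearest-neighbour resampling for brevity (the paper uses bilinear)
    return img[ys.round().astype(int), xs.round().astype(int)]
```

In the attack, every poisoned training image gets this same fixed warp plus the attacker's target label; because the displacement is a fraction of a pixel to a pixel or two, the warped images are hard to distinguish from clean ones by eye.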

136 stars. No commits in the last 6 months.

Use this if you are a machine learning security researcher investigating how image classification models can be subtly compromised by 'backdoor' attacks that are hard to detect.

Not ideal if you are looking to defend against common, overt adversarial attacks or to improve the general robustness of your image classification models against noise.

Tags: AI security research, Adversarial machine learning, Image classification vulnerabilities, Computer vision safety, Model robustness testing
Status: Stale (6 months), no package published, no known dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 136
Forks: 21
Language: Python
License: AGPL-3.0
Last pushed: Nov 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VinAIResearch/Warping-based_Backdoor_Attack-release"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
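The same endpoint can be called from Python. This is a minimal sketch that builds the request URL shown in the curl example; `quality_url` is a hypothetical helper, and the authentication header for the keyed 1,000/day tier is an assumption to verify against the API docs.

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Build the quality-score endpoint URL for a repository.
    # (Path layout inferred from the curl example above.)
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "VinAIResearch",
                  "Warping-based_Backdoor_Attack-release")

# To fetch (not run here; add the key header only if you have one --
# the header name is an assumption, check the API docs):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
```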