VinAIResearch/Warping-based_Backdoor_Attack-release
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)
This project implements WaNet, a method for planting 'backdoor' attacks in image classifiers. It applies a subtle, nearly imperceptible elastic warping to a small fraction of a training dataset (e.g. traffic signs or faces) and relabels those warped images with an attacker-chosen target class. The classifier trained on this poisoned data performs normally on clean inputs but misclassifies any image carrying the warping trigger in the way the attacker intends. It is aimed at researchers in AI safety and security who want to understand and test vulnerabilities in computer vision models.
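The core idea of the warping trigger can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the repository's actual code (which is PyTorch-based and uses `grid_sample`); the helper names, the 4×4 control-grid size, and the strength parameter `s` are all assumptions chosen for clarity. A small random offset grid is upsampled to a smooth, image-sized warping field, and each poisoned image is resampled along that field:

```python
import numpy as np

def bilinear_resize(field, size):
    """Upsample a (k, k, C) control grid to (size, size, C) bilinearly,
    producing a smooth warping field from a few random offsets."""
    k = field.shape[0]
    coords = np.linspace(0, k - 1, size)
    i0 = np.floor(coords).astype(int)
    i1 = np.minimum(i0 + 1, k - 1)
    t = coords - i0
    # interpolate along rows, then along columns
    rows = field[i0] * (1 - t)[:, None, None] + field[i1] * t[:, None, None]
    return rows[:, i0] * (1 - t)[None, :, None] + rows[:, i1] * t[None, :, None]

def warp_image(img, field, s=0.5):
    """Apply the trigger: resample img at pixel positions shifted by s * field.
    Small s keeps the distortion nearly imperceptible."""
    h, w = img.shape[:2]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(yy + s * field[..., 0], 0, h - 1)
    sx = np.clip(xx + s * field[..., 1], 0, w - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = sy - y0; wx = sx - x0
    if img.ndim == 3:  # broadcast weights over color channels
        wy = wy[..., None]; wx = wx[..., None]
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Usage sketch: poison one image with a fixed warping field.
rng = np.random.default_rng(0)
field = bilinear_resize(rng.uniform(-1, 1, (4, 4, 2)), 32)  # smooth 32x32 field
img = rng.random((32, 32, 3))
poisoned = warp_image(img, field, s=0.5)  # visually close to img, carries the trigger
```

Because the same fixed field warps every poisoned image, the model can learn it as a trigger, while the per-pixel shifts (here at most half a pixel) stay hard for a human inspector to notice.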
136 stars. No commits in the last 6 months.
Use this if you are a machine learning security researcher investigating how image classification models can be subtly compromised by 'backdoor' attacks that are hard to detect.
Not ideal if you are looking to defend against common, overt adversarial attacks or to improve the general robustness of your image classification models against noise.
Stars
136
Forks
21
Language
Python
License
AGPL-3.0
Category
ML Frameworks
Last pushed
Nov 11, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VinAIResearch/Warping-based_Backdoor_Attack-release"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...