ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoor attacks.
This framework helps machine learning researchers and security analysts evaluate the robustness of deep learning models against malicious 'backdoor' attacks. It lets you simulate various types of backdoor attacks by injecting trigger inputs and observing the manipulated outputs, and then test different defense mechanisms. Researchers in AI security and trust can use it to understand vulnerabilities and develop stronger, more secure AI systems.
378 stars. No commits in the last 6 months.
Use this if you are a machine learning security researcher or practitioner investigating how deep learning models can be compromised through backdoors, and you need a tool to easily implement and test different attack and defense strategies.
Not ideal if you are looking for a plug-and-play solution to automatically secure an existing production model without needing to understand the underlying attack vectors or defense mechanisms.
Stars: 378
Forks: 85
Language: Python
License: MIT
Category:
Last pushed: Feb 05, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ebagdasa/backdoors101"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
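The same endpoint can be called from Python instead of curl. This is a minimal sketch: the URL is taken from the curl example above, but the shape of the JSON response is an assumption, not a documented schema.

```python
# Minimal sketch of querying the quality API from Python.
# Only the endpoint URL comes from the curl example above; any
# response fields are assumptions, not a documented schema.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_repo_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record for one repository."""
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_repo_quality("ml-frameworks", "ebagdasa", "backdoors101")
    print(data)
```

Unauthenticated calls count against the 100 requests/day limit, so cache the response rather than fetching it on every run.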
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...
VinAIResearch/Warping-based_Backdoor_Attack-release
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)