Megum1/BEAGLE
[NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
BEAGLE helps AI security researchers and MLOps engineers identify and remove hidden "backdoors" in deep learning models. Given a few examples of a compromised model or poisoned data, it reconstructs the specific trigger pattern used in the backdoor attack. The output is either a synthesized scanner that detects the backdoor or a hardened model resistant to it.
No commits in the last 6 months.
Use this if you need to perform forensic analysis on a deep learning model to understand if it has been maliciously tampered with, and then either remove the backdoor or create a detection tool.
Not ideal if you are looking for a general-purpose anomaly detection system for model integrity without specific knowledge or examples of a potential backdoor.
Stars: 17
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: May 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Megum1/BEAGLE"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...