Megum1/BEAGLE

[NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense

Score: 31 / 100 (Emerging)

BEAGLE helps AI security researchers and MLOps engineers identify and remove hidden 'backdoors' in deep learning models. It takes a few examples of a compromised model or poisoned data and reveals the specific trigger patterns used in a backdoor attack. The output is a synthesized scanner to detect the backdoor or a hardened model resistant to the attack.

No commits in the last 6 months.

Use this if you need to perform forensic analysis on a deep learning model to understand if it has been maliciously tampered with, and then either remove the backdoor or create a detection tool.

Not ideal if you are looking for a general-purpose anomaly detection system for model integrity without specific knowledge or examples of a potential backdoor.

Tags: AI security, MLOps, deep learning, forensics, model hardening, threat detection

Badges: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 9 / 25

Stars: 17
Forks: 2
Language: Python
License: MIT
Last pushed: May 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Megum1/BEAGLE"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
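The curl command above can also be issued from Python. A minimal sketch follows; the URL path shape (`/quality/<collection>/<owner>/<repo>`) is copied from the curl example, while the existence of other collections and the response's JSON schema are assumptions, not documented here.

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build a quality-API URL; path shape copied from the curl example above."""
    return f"{BASE}/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "Megum1", "BEAGLE")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Megum1/BEAGLE

# To fetch for real (network required; assumes the endpoint returns JSON):
#   import urllib.request, json
#   data = json.loads(urllib.request.urlopen(url).read().decode())
```

Since the anonymous tier allows 100 requests/day, a script polling many repositories would need a key; how the key is passed (header vs. query parameter) is not stated on this page.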