ebagdasa/backdoors101

Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoor attacks.

Score: 49 / 100 (Emerging)

This framework helps machine learning researchers and security analysts evaluate the robustness of deep learning models against malicious 'backdoor' attacks. It lets you simulate various types of backdoor attacks, in which a model is trained to produce attacker-chosen outputs on inputs containing a specific trigger, and then test different defense mechanisms against them. Researchers in AI security and trust can use it to understand these vulnerabilities and develop stronger, more secure AI systems.

378 stars. No commits in the last 6 months.

Use this if you are a machine learning security researcher or practitioner investigating how deep learning models can be compromised through backdoors, and you need a tool to easily implement and test different attack and defense strategies.

Not ideal if you are looking for a plug-and-play solution to automatically secure an existing production model without needing to understand the underlying attack vectors or defense mechanisms.

Tags: AI Security · Machine Learning Robustness · Adversarial Machine Learning · Federated Learning Security · Model Vulnerability Testing
Flags: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 378
Forks: 85
Language: Python
License: MIT
Last pushed: Feb 05, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ebagdasa/backdoors101"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
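If you prefer to call the endpoint from Python instead of curl, a minimal sketch using only the standard library follows. The URL pattern is taken from the curl example above; the helper names (`quality_url`, `fetch_quality`) and the assumption that the response is JSON are illustrative, not a documented API schema.

```python
# Hypothetical sketch of calling the quality API shown above.
# The endpoint URL comes from the curl example; helper names and the
# assumption of a JSON response body are this sketch's own.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a category and repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl command above:
    print(quality_url("ml-frameworks", "ebagdasa", "backdoors101"))
```

Unauthenticated calls count against the 100-requests/day limit noted above.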