SewoongLab/spectre-defense

Defending Against Backdoor Attacks Using Robust Covariance Estimation

Score: 38 / 100 (Emerging)

This project helps machine learning researchers and security analysts protect image classification models from "backdoor" attacks. When a model is trained on a poisoned dataset, malicious patterns hidden in the data can trigger incorrect predictions later. This tool takes a trained, potentially backdoored image classifier together with the hidden-layer representations of its training data, then identifies and removes the likely poisoned samples so the model can be retrained on the cleaned set.
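To make the workflow concrete, here is a minimal sketch of the general idea: whiten the hidden representations, score each sample by its distance from the mean in whitened space, and drop the highest-scoring fraction. This is only an illustration, not the repository's actual algorithm: SPECTRE specifically replaces the plain empirical covariance used below with a robust covariance estimate and uses quantum-entropy-based scores, and the function name here is hypothetical.

```python
import numpy as np

def remove_suspected_poison(reps, remove_frac=0.1):
    """Flag likely-poisoned samples among hidden representations.

    Simplified sketch: whiten with the empirical covariance and score
    each sample by its norm in whitened space. (SPECTRE itself uses a
    robust covariance estimate and quantum-entropy scores instead.)
    Returns (indices to keep, indices flagged as suspect).
    """
    reps = np.asarray(reps, dtype=float)
    centered = reps - reps.mean(axis=0)
    # Regularize slightly so the covariance is invertible.
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(reps.shape[1])
    # Whitening transform via the inverse matrix square root of cov.
    vals, vecs = np.linalg.eigh(cov)
    whiten = vecs @ np.diag(vals ** -0.5) @ vecs.T
    scores = np.linalg.norm(centered @ whiten, axis=1)
    n_remove = int(len(reps) * remove_frac)
    suspect = np.argsort(scores)[-n_remove:]
    keep = np.setdiff1d(np.arange(len(reps)), suspect)
    return keep, suspect
```

In practice the representations would come from a late hidden layer of the trained classifier, restricted to one class at a time, and the surviving indices would define the cleaned training set for retraining.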

No commits in the last 6 months.

Use this if you are a machine learning security researcher or practitioner working with image classification models that might have been compromised by data poisoning or backdoor attacks.

Not ideal if you are not working with image data, do not have access to the hidden representations of your neural network, or are looking for defenses against types of attacks other than backdoors.

AI-security machine-learning-defense data-poisoning backdoor-attacks image-classification
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 22
Forks: 7
Language: Python
License: MIT
Last pushed: Jul 12, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SewoongLab/spectre-defense"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
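The same request can be made from Python with only the standard library. The URL layout below is taken from the curl example above; the response schema is not documented here, so treat field names as unknown until you inspect a real response, and note that the `X-Api-Key` header name is an assumption rather than a documented part of the API.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection, owner, repo):
    # Build the endpoint URL using the path layout from the curl example.
    return f"{API_BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection, owner, repo, api_key=None):
    """Fetch a project's quality record as parsed JSON.

    The auth header name below is an assumption; check the API's own
    documentation for the actual mechanism before relying on it.
    """
    req = urllib.request.Request(quality_url(collection, owner, repo))
    if api_key:
        req.add_header("X-Api-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("ml-frameworks", "SewoongLab", "spectre-defense")` would hit the same endpoint as the curl command above.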