csdongxian/ANP_backdoor

Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"

Score: 34 / 100 (Emerging)

This project helps machine learning engineers and researchers remove hidden, malicious 'backdoors' from deep learning models. It takes a pre-trained, potentially compromised model and a small set of clean data, and outputs a 'purified' model in which the backdoor is removed or substantially weakened. The purified model performs its intended task accurately without being vulnerable to the backdoor trigger.
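The purification idea can be illustrated numerically. The sketch below is a deliberately tiny, hypothetical toy of the robust (min-max) pruning objective, not the paper's implementation: the real ANP learns a continuous per-neuron mask by gradient descent under adversarial weight perturbations on a deep network, whereas here a two-neuron linear "model" with a planted high-weight backdoor neuron is pruned by grid-searching the mask against worst-case multiplicative weight perturbations.

```python
from itertools import product

# Toy linear "model": two neurons with weights W; neuron 1 plays the
# backdoor role (large weight, almost no clean-task signal). All names
# and numbers here are illustrative, not from the paper's code.
W = [1.0, 4.0]
# Small clean set: the target depends only on feature 0; feature 1 is noise.
DATA = [([1.0, 0.1], 1.0), ([2.0, 0.1], 2.0)]
EPS = 0.5  # adversarial budget: each weight may be scaled by (1 +/- EPS)

def robust_loss(mask):
    """Worst-case MSE over multiplicative weight perturbations.

    The loss is convex in the perturbation, so the worst case lies at a
    corner; we enumerate delta in {-EPS, +EPS} per neuron.
    """
    worst = 0.0
    for deltas in product((-EPS, EPS), repeat=len(W)):
        total = 0.0
        for x, y in DATA:
            pred = sum(m * (1 + d) * w * xi
                       for m, d, w, xi in zip(mask, deltas, W, x))
            total += (pred - y) ** 2
        worst = max(worst, total / len(DATA))
    return worst

# Grid-search the pruning mask (ANP instead learns it by gradient descent).
grid = [i / 4 for i in range(5)]  # {0, 0.25, 0.5, 0.75, 1.0}
best = min(product(grid, repeat=2), key=robust_loss)
print(best)  # -> (1.0, 0.0): the sensitive backdoor neuron is pruned
```

The robust objective keeps the neuron that carries the clean signal at full strength while zeroing the neuron whose large weight makes the model most sensitive to perturbation, which mirrors ANP's intuition that backdoor-related neurons are the ones most easily perturbed into damaging clean predictions.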

No commits in the last 6 months.

Use this if you need to neutralize a backdoor attack in a deep neural network, especially when you have limited clean data and computational resources.

Not ideal if you are looking to detect backdoors without attempting to remove them, or if you need a defense for traditional machine learning models rather than deep learning.

deep-learning-security model-purification adversarial-machine-learning neural-network-hardening cybersecurity-ml
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 18 / 25


Stars: 63
Forks: 15
Language: Python
License: none
Last pushed: May 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csdongxian/ANP_backdoor"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
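The same request can be made from Python with only the standard library. This is a minimal sketch mirroring the curl example above; the response is assumed to be a JSON body, and its exact schema (score fields, etc.) is not documented here.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the report URL for a repo (mirrors the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_report(category: str, owner: str, repo: str) -> dict:
    # Assumes the endpoint returns JSON; no API key is needed at the
    # free 100 requests/day tier.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "csdongxian", "ANP_backdoor"))
```

Calling `fetch_report("ml-frameworks", "csdongxian", "ANP_backdoor")` would then return the parsed report as a dict.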