csdongxian/ANP_backdoor
Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
This project helps machine learning engineers and researchers remove hidden, malicious 'backdoors' from deep learning models. Given a pre-trained, potentially compromised model and a small set of clean data, it outputs a 'purified' model in which the backdoor is removed or significantly weakened, so the model can perform its intended task accurately without responding to the backdoor trigger.
No commits in the last 6 months.
Use this if you need to neutralize a backdoor attack in a deep neural network, especially when you have limited clean data and computational resources.
Not ideal if you only need to detect backdoors without removing them, or if you need a defense for traditional (non-deep) machine learning models.
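To make the end result concrete: ANP learns a sensitivity score (a mask in [0, 1]) per neuron and then prunes the low-scoring ones. The sketch below shows only that final thresholding step on a toy weight matrix; it is not the repository's code, and the mask values, threshold, and layer shape are made up for illustration.

```python
import numpy as np

# Hypothetical per-neuron mask values as ANP's adversarial mask
# optimization might produce them (numbers are illustrative only).
# Low values mark neurons the adversarial perturbation found fragile,
# i.e. likely backdoor-related.
masks = np.array([0.98, 0.95, 0.12, 0.99, 0.07, 0.96])

# Toy layer weights: 6 neurons (rows) x 4 inputs.
W = np.arange(24, dtype=float).reshape(6, 4)

threshold = 0.2                  # prune neurons whose mask < threshold
keep = masks >= threshold
W_pruned = W * keep[:, None]     # zero out the pruned neurons' weights

print(np.flatnonzero(~keep))     # -> [2 4]
```

In the actual defense, the masks come from a minimax optimization on the small clean set (perturb masks adversarially, then re-learn them), and pruning the low-mask neurons is what weakens the backdoor while the clean data keeps the remaining neurons accurate.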
Stars
63
Forks
15
Language
Python
License
—
Category
Last pushed
May 08, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csdongxian/ANP_backdoor"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...