YiZeng623/I-BAU
Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
This project helps machine learning engineers and researchers remove hidden vulnerabilities, known as backdoors, from their trained deep learning models. It takes a model that might have been compromised and a small set of clean, unpoisoned data, then outputs a 'cleaned' version of the model that no longer responds to backdoor triggers. This is essential for anyone deploying models in sensitive environments.
No commits in the last 6 months.
Use this if you need to quickly and effectively remove backdoors from a potentially compromised deep learning model, especially when you have very limited clean data available for the unlearning process.
Not ideal if you are developing new deep learning models from scratch and want to prevent backdoors during the initial training phase rather than removing them from an already trained model.
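At its core, I-BAU frames backdoor removal as a minimax problem: an inner loop searches for a universal trigger perturbation that maximizes the model's loss on the small clean set, and an outer loop updates the model to resist that trigger (the paper solves the outer step with implicit hypergradients). The sketch below illustrates the alternating structure only; it is not the repository's code. It uses a toy NumPy linear classifier and a plain first-order outer update in place of the implicit-hypergradient solver, and all model, data, and hyperparameter choices are illustrative assumptions.

```python
# Conceptual sketch of I-BAU-style adversarial unlearning (NOT the repo's
# implementation): alternate an inner maximization over a shared trigger
# perturbation with an outer minimization over the model weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier y = sign(w @ x) on 2-D inputs (illustrative stand-in
# for the potentially backdoored network).
w = np.array([1.0, -1.0])
X = rng.normal(size=(64, 2))              # small clean, unpoisoned batch
y = np.sign(X @ np.array([1.0, -1.0]))    # clean labels

def loss(w, X, y):
    # Logistic loss on a batch.
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def grad_w(w, X, y):
    # Gradient of the logistic loss w.r.t. the weights.
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))
    return (coef[:, None] * X).mean(axis=0)

def grad_delta(w, X, y, delta):
    # Gradient of the loss w.r.t. a universal additive trigger delta;
    # chain rule through the margin (y**2 == 1 collapses the label term).
    margins = y * ((X + delta) @ w)
    coef = -1.0 / (1.0 + np.exp(margins))
    return coef.mean() * w

delta = np.zeros(2)                        # candidate universal trigger
for outer in range(50):
    # Inner loop: ascend the loss w.r.t. delta, approximating the
    # worst-case trigger for the current model.
    for inner in range(5):
        delta = np.clip(delta + 0.5 * grad_delta(w, X, y, delta),
                        -0.3, 0.3)         # keep the trigger norm bounded
    # Outer step: descend the model loss on clean plus triggered data,
    # a first-order stand-in for the implicit-hypergradient update.
    w -= 0.1 * (grad_w(w, X, y) + grad_w(w, X + delta, y))

clean_loss = loss(w, X, y)
triggered_loss = loss(w, X + delta, y)
```

After the loop, the model has been pushed to classify correctly even under the strongest trigger the inner loop could find, which is the intuition behind unlearning the backdoor from only a small clean dataset.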
Stars
53
Forks
12
Language
Jupyter Notebook
License
MIT
Last pushed
Nov 16, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/YiZeng623/I-BAU"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...