YiZeng623/I-BAU

Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient".

Score: 42 / 100 (Emerging)

This project helps machine learning engineers and researchers remove hidden vulnerabilities, known as backdoors, from their trained deep learning models. It takes a model that might have been compromised and a small set of clean, unpoisoned data, then outputs a 'cleaned' version of the model that no longer responds to backdoor triggers. This is essential for anyone deploying models in sensitive environments.
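For intuition, here is a minimal PyTorch sketch of the min-max idea behind the method: recover a worst-case universal perturbation on a clean batch, then update the model so that perturbation loses its effect. The paper solves the inner step with implicit hypergradients; this first-order alternation only approximates it, and every name here (unlearn_backdoor, clean_loader, the hyperparameters) is illustrative rather than the repository's actual API.

import torch
import torch.nn.functional as F

def unlearn_backdoor(model, clean_loader, rounds=5, inner_steps=5,
                     trigger_lr=0.1, model_lr=1e-4, device="cpu"):
    # Approximate min-max backdoor unlearning on a small clean set.
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=model_lr)
    for _ in range(rounds):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            # Inner maximization: recover a universal perturbation
            # (a surrogate trigger) that most increases the clean loss.
            delta = torch.zeros_like(x[:1], requires_grad=True)
            for _ in range(inner_steps):
                loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
                grad, = torch.autograd.grad(loss, delta)
                delta = (delta + trigger_lr * grad.sign()).detach().requires_grad_(True)
            # Outer minimization: update the model so the recovered
            # trigger no longer flips predictions, while staying
            # accurate on the unperturbed clean data.
            delta = delta.detach()
            opt.zero_grad()
            adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            clean_loss = F.cross_entropy(model(x), y)
            (clean_loss + adv_loss).backward()
            opt.step()
    return model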

No commits in the last 6 months.

Use this if you need to quickly and effectively remove backdoors from a potentially compromised deep learning model, especially when you have very limited clean data available for the unlearning process.

Not ideal if you are developing new deep learning models from scratch and want to prevent backdoors during the initial training phase rather than removing them from an already trained model.

model-security machine-learning-auditing adversarial-robustness deep-learning-safety AI-trustworthiness
Flags: Stale (6m) · No package · No dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 53
Forks: 12
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 16, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/YiZeng623/I-BAU"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
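For scripted access, a hypothetical Python equivalent of the curl call above; this assumes the endpoint returns JSON, which the page does not state explicitly.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/YiZeng623/I-BAU")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surfaces rate-limit or server errors
print(resp.json())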