Megum1/UNIT
[ECCV'24] UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
This is a tool for machine learning engineers and researchers to evaluate and mitigate 'backdoor' attacks on neural networks. It takes a trained model that might be compromised and helps you understand how vulnerable it is to specific backdoor attacks like BadNets or WaNet. It then applies a defense mechanism to reduce the effectiveness of these attacks, outputting metrics like accuracy and attack success rate to show the model's improved robustness.
Use this if you are concerned about the security and trustworthiness of your deep learning models, particularly if they are trained on external or untrusted datasets and might have hidden malicious behaviors.
Not ideal if you are looking for general model interpretability, adversarial robustness against direct input perturbations, or methods to prevent data poisoning during initial model training.
Stars: 10
Forks: 1
Language: Python
License: MIT
Category: (not listed)
Last pushed: Dec 18, 2025
Commits (30d): 0
Get this data via API:
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Megum1/UNIT"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
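For programmatic use, the curl command above can be wrapped in a few lines of Python. This is a minimal sketch, not an official client: the response schema is not documented on this page, so the helper simply decodes whatever JSON the endpoint returns, and the function names are illustrative.

```python
# Sketch of calling the pt-edge quality API (endpoint taken from the
# curl command above; response schema is undocumented here).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (structure unspecified)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage: `fetch_quality("Megum1", "UNIT")` returns the same data as the curl command; note the free tier allows 100 requests per day without a key.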
Higher-rated alternatives:
QData/TextAttack: TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101: Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources: A list of backdoor learning resources
zhangzp9970/MIA: Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks: [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...