Megum1/UNIT

[ECCV'24] UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

Score: 34 / 100 (Emerging)

This is a tool for machine learning engineers and researchers to evaluate and mitigate "backdoor" attacks on neural networks. It takes a trained model that may be compromised and helps you measure how vulnerable it is to specific backdoor attacks such as BadNets or WaNet. It then applies a defense mechanism to reduce the effectiveness of these attacks, reporting metrics such as clean accuracy and attack success rate to show the model's improved robustness.

Use this if you are concerned about the security and trustworthiness of your deep learning models, particularly if they are trained on external or untrusted datasets and might have hidden malicious behaviors.

Not ideal if you are looking for general model interpretability, adversarial robustness against direct input perturbations, or methods to prevent data poisoning during initial model training.

Tags: AI-security, deep-learning-robustness, machine-learning-auditing, neural-network-hardening, model-security
No package · No dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 10
Forks: 1
Language: Python
License: MIT
Last pushed: Dec 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Megum1/UNIT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
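The endpoint returns JSON that can be processed with standard tools. A minimal sketch of parsing such a response in Python — note that the actual response schema is not documented here, so the field names below (`overall`, `breakdown`) are illustrative assumptions, not the API's confirmed format:

```python
import json

# Hypothetical payload: field names are assumptions for illustration,
# with values taken from the score card above.
sample = """
{
  "overall": 34,
  "breakdown": {
    "maintenance": 6,
    "adoption": 5,
    "maturity": 16,
    "community": 7
  }
}
"""

data = json.loads(sample)

# Print the overall score and each sub-score out of 25.
print(f"Overall score: {data['overall']} / 100")
for metric, value in sorted(data["breakdown"].items()):
    print(f"  {metric}: {value} / 25")
```

In a real pipeline you would pipe the `curl` output into a script like this (or use `jq`) instead of embedding a sample payload.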