Megum1/LOTUS

[CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning

Score: 22 / 100 (Experimental)

This project helps AI security researchers and red teamers understand and demonstrate LOTUS, an evasive backdoor attack on image classification models based on trigger sub-partitioning. It takes a dataset of images (such as CIFAR-10) and a pre-trained model, then applies backdoor triggers that cause the model to misclassify specific inputs to a target class. The output is a modified model that behaves normally on most inputs but exhibits the targeted misclassification when a hidden trigger is present.
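To make the poisoning step above concrete, here is a minimal sketch of a classic patch-style backdoor: stamp a small trigger onto an image and relabel it to the target class. This is an illustration only, not LOTUS's actual sub-partitioning method, which assigns distinct triggers to different sub-partitions of the victim class; the function name and patch pattern are hypothetical.

```python
import numpy as np

def poison(image: np.ndarray, target_label: int, patch_size: int = 3):
    """Illustrative patch trigger: stamp a white square in the
    bottom-right corner and relabel the sample to the target class.
    (Hypothetical helper; not the LOTUS trigger design.)"""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 255  # trigger pattern
    return poisoned, target_label

# Usage: poison one CIFAR-10-shaped image (32x32x3) to target class 0.
img = np.zeros((32, 32, 3), dtype=np.uint8)
p_img, p_label = poison(img, target_label=0)
```

At training time, a small fraction of such poisoned samples is mixed into the clean data, so the model learns to associate the trigger with the target class while clean accuracy stays high.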

No commits in the last 6 months.

Use this if you are a security researcher or red teamer studying adversarial attacks and want to implement and test a state-of-the-art evasive backdoor technique on image classification models.

Not ideal if you are looking to defend against backdoor attacks or need a tool for general image classification tasks without exploring model vulnerabilities.

AI Security · Adversarial Machine Learning · Computer Vision Attacks · Model Vulnerability Testing · Red Teaming AI
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 15
Forks:
Language: Python
License: MIT
Last pushed: Jan 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Megum1/LOTUS"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
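The same request can be made from Python. This is a minimal sketch: the URL structure is taken from the curl example above, but the response schema is undocumented here, so the code simply parses whatever JSON the endpoint returns; `quality_url` and `fetch_quality` are hypothetical helper names.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the endpoint URL for a repo's quality data."""
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(url: str) -> dict:
    """Fetch and parse the JSON body (schema not documented here)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "Megum1/LOTUS")
# data = fetch_quality(url)  # uncomment to perform the actual request
```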