Megum1/ODSCAN

[IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models

Score: 24 / 100 (Experimental)

This project helps security researchers and machine learning engineers identify hidden malicious behaviors, known as backdoors, in object detection models. It takes a trained object detection model as input and analyzes it to determine if it has been tampered with to misclassify objects or make new, non-existent objects 'appear' when a specific trigger is present. The output indicates whether a backdoor is detected, along with visual evidence of the inverted triggers.

No commits in the last 6 months.

Use this if you need to audit an object detection model for potential backdoor attacks, such as those that could cause misclassifications or introduce fake objects under specific conditions.

Not ideal if you need to defend against model vulnerabilities other than misclassification or object-appearing backdoors in detection models.

Tags: AI security, model auditing, object detection, machine learning security, backdoor detection

Badges: Stale (6 months), No Package, No Dependents

Score breakdown:
- Maintenance: 2 / 25
- Adoption: 6 / 25
- Maturity: 16 / 25
- Community: 0 / 25


- Stars: 21
- Forks:
- Language: Python
- License: MIT
- Last pushed: Oct 05, 2025
- Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Megum1/ODSCAN"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
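For scripted access, the same endpoint can be queried from Python. This is a minimal sketch using only the standard library; the URL pattern is taken from the curl example above, but the JSON response schema is an assumption and should be checked against an actual response.

```python
# Sketch: querying the quality API for a repository.
# Only the URL pattern is known from the curl example; the shape of the
# returned JSON is an assumption and may differ in practice.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repository's quality report."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Without an API key this is limited to 100 requests/day.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Example call (performs a network request, so not executed here):
# report = fetch_quality("computer-vision", "Megum1/ODSCAN")
```

With a free API key the limit rises to 1,000 requests/day; how the key is passed (header or query parameter) is not documented here, so consult the API provider before relying on this sketch.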