HanxunH/CognitiveDistillation

[ICLR2023] Distilling Cognitive Backdoor Patterns within an Image

Quality score: 37 / 100 (Emerging)

This project identifies 'backdoor patterns' hidden within images that could manipulate a pre-trained AI model. Given a trained image classification model and a batch of images, it outputs 'masks' that highlight these suspicious regions (a minimal usage sketch follows below). The tool is aimed at AI security researchers and model auditors concerned with detecting and analyzing poisoned data.

Use this if you need to detect hidden, malicious patterns in images that could cause your AI models to behave unexpectedly or incorrectly.

Not ideal if you are looking for a general image anomaly detection tool or a method to improve model accuracy.
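For orientation, here is a minimal PyTorch sketch of the Cognitive Distillation idea from the paper: optimize a per-image mask so that the masked image (with the discarded region replaced by noise) preserves the model's output, while keeping the mask sparse. The function and hyperparameter names (distill_mask, steps, lr, alpha) are illustrative assumptions, not the repository's API; consult the repo's README for its actual class and defaults.

import torch

def distill_mask(model, images, steps=100, lr=0.1, alpha=0.01):
    # images: (B, C, H, W). Returns soft masks of shape (B, 1, H, W).
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the mask is optimized
    with torch.no_grad():
        target = model(images)  # frozen reference outputs f(x)

    # Unconstrained parameter; sigmoid keeps the mask in [0, 1].
    mask_param = torch.zeros(images.size(0), 1, images.size(2), images.size(3),
                             requires_grad=True)
    optimizer = torch.optim.Adam([mask_param], lr=lr)

    for _ in range(steps):
        mask = torch.sigmoid(mask_param)
        noise = torch.randn_like(images)              # resampled every step
        distilled = images * mask + (1 - mask) * noise
        # Match the original output; penalize mask area (L1 sparsity).
        loss = (model(distilled) - target).abs().mean() + alpha * mask.abs().mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask_param).detach()

Detection then reduces to thresholding each mask's L1 norm: backdoored images distill to unusually small, concentrated patterns, while clean images need larger masks to preserve the model's prediction.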

Tags: AI Security · Model Auditing · Adversarial Machine Learning · Data Poisoning Detection · Computer Vision Security
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 8 / 25

Stars: 36
Forks: 3
Language: Python
License: MIT
Last pushed: Oct 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HanxunH/CognitiveDistillation"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
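For scripted access, here is a small Python sketch using only the standard library. The response schema is not documented on this page, so it simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/HanxunH/CognitiveDistillation")

# No API key needed below the 100 requests/day free tier.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))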