HanxunH/Detect-CLIP-Backdoor-Samples

[ICLR2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining

Score: 34 / 100 (Emerging)

This tool helps AI researchers and machine learning engineers working with large image-text datasets like those used to train CLIP models. It takes in a trained CLIP encoder and a batch of its training images, then flags 'backdoor' samples, i.e. poisoned training inputs that could embed hidden vulnerabilities in the model. The output is a per-image score indicating the likelihood that the image is a backdoor sample.
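The core idea behind detectors of this kind is that backdoored samples form an unusually tight cluster in the encoder's embedding space, so a local-density score over image embeddings can flag them. The sketch below is illustrative only, not the repo's exact implementation: random vectors stand in for CLIP image embeddings, and `backdoor_scores` is a hypothetical helper name.

```python
import numpy as np

def backdoor_scores(embeddings, k=16):
    """Score each sample by the density of its local neighborhood in
    embedding space. Poisoned samples tend to cluster tightly around the
    trigger, so a high mean cosine similarity to the k nearest neighbors
    is suspicious. Illustrative sketch; see the repo for the real method."""
    # L2-normalize so dot products are cosine similarities
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # mean similarity to the k nearest neighbors = local density score
    topk = np.sort(sim, axis=1)[:, -k:]
    return topk.mean(axis=1)

# Toy data: 200 "clean" embeddings plus 20 near-duplicates
# mimicking a trigger-induced cluster.
rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 64))
trigger = rng.normal(size=(1, 64)) + 0.05 * rng.normal(size=(20, 64))
scores = backdoor_scores(np.vstack([clean, trigger]), k=8)
```

In this toy setup the 20 clustered samples receive markedly higher scores than the clean ones, so ranking by score surfaces them for inspection.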

No commits in the last 6 months.

Use this if you need to identify malicious or unintentionally problematic samples within the massive datasets used for training large-scale vision-language models like CLIP.

Not ideal if you are working with supervised learning models or datasets that are not web-scale image-text pairs.

AI Safety · Model Auditing · Data Curation · Machine Learning Security · Vision-Language Models
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 19
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 26, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HanxunH/Detect-CLIP-Backdoor-Samples"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.