nazmul-karim170/FIP

[CCS'24] Official Implementation of "Fisher Information guided Purification against Backdoor Attacks"

Quality score: 37 / 100 (Emerging)

This project helps machine learning engineers and AI security researchers identify and remove 'backdoor' vulnerabilities from trained AI models. It takes a suspicious trained model, analyzes its internal workings using Fisher Information, and then 'purifies' it to remove malicious behaviors, producing a safer, more reliable model. This is especially useful when deploying AI in sensitive applications such as image classification, action recognition, or natural language processing.

Use this if you need to clean a trained AI model that might have been compromised by a backdoor attack, ensuring it performs reliably without hidden malicious functions.

Not ideal if you are looking to prevent backdoor attacks during the initial training phase, or to detect whether a model could be backdoored before it is trained.

AI-security model-purification backdoor-detection machine-learning-security adversarial-robustness
No Package No Dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 14
Forks: 2
Language: Python
License: MIT
Last pushed: Oct 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/nazmul-karim170/FIP"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.