iamaaditya/pixel-deflection

Deflecting Adversarial Attacks with Pixel Deflection

Score: 36 / 100 (Emerging)

This tool helps machine learning engineers and researchers protect image classification models from adversarial attacks. You input an image that may contain adversarial perturbations, and it outputs a 'cleaned' version that helps your model make the correct classification. It's designed for those deploying or evaluating computer vision models in potentially insecure environments.

No commits in the last 6 months.

Use this if you need to improve the robustness and accuracy of your image classification models against deliberately manipulated 'adversarial' images.

Not ideal if you are dealing with non-image data, or if you need robustness to natural noise rather than targeted adversarial attacks.
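The underlying paper's defense has two stages: pixel deflection, which replaces a random sample of pixels with randomly chosen nearby pixels (disrupting the carefully tuned adversarial perturbation more than natural image statistics), followed by wavelet denoising to smooth out the introduced noise. Below is a minimal NumPy sketch of the deflection stage only; the function name, default parameters, and sampling details are illustrative and not taken from the repo's actual implementation:

```python
import numpy as np

def pixel_deflect(img, n_deflections=200, window=10, seed=None):
    """Sketch of pixel deflection: randomly replace pixels with neighbours.

    img: H x W x C image array (uint8 or float). Returns a deflected copy.
    n_deflections and window are illustrative defaults, not the repo's.
    """
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        # pick a random pixel to deflect
        x = int(rng.integers(0, h))
        y = int(rng.integers(0, w))
        # pick a random neighbour within the window, clipped to the image
        nx = int(np.clip(x + rng.integers(-window, window + 1), 0, h - 1))
        ny = int(np.clip(y + rng.integers(-window, window + 1), 0, w - 1))
        out[x, y] = out[nx, ny]
    return out
```

In the paper's pipeline the deflected image would then be wavelet-denoised (e.g. soft-thresholding in a wavelet basis) before being passed to the classifier; only a small fraction of pixels is deflected, so the image's class-relevant content survives while the adversarial perturbation is degraded.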

adversarial-robustness image-classification computer-vision machine-learning-security AI-safety
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 19 / 25

Stars

72

Forks

21

Language

Jupyter Notebook

License

None

Last pushed

Jun 21, 2018

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iamaaditya/pixel-deflection"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.