iamaaditya/pixel-deflection
Deflecting Adversarial Attacks with Pixel Deflection
This tool helps machine learning engineers and researchers protect image classification models from adversarial attacks. You feed it an image that may have been adversarially perturbed to trick your model, and it outputs a 'cleaned' version that helps the model classify correctly. It's designed for those deploying or evaluating computer vision models in potentially insecure environments.
No commits in the last 6 months.
Use this if you need to improve the robustness and accuracy of your image classification models against deliberately manipulated 'adversarial' images.
Not ideal if you are working with non-image data, or if you need robustness to natural noise rather than targeted adversarial attacks.
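The core idea behind the defense is simple: randomly pick pixels and replace each with the value of a randomly chosen nearby pixel, which disrupts carefully crafted adversarial perturbations while leaving natural image statistics largely intact (the paper pairs this with a wavelet-denoising step afterwards). A minimal NumPy sketch of the deflection step, with illustrative function and parameter names that are not the repository's exact API:

```python
import numpy as np

def pixel_deflection(img, deflections=200, window=10, seed=None):
    """Randomly replace pixels with a nearby pixel's value.

    img: H x W x C array. `deflections` and `window` defaults are
    illustrative assumptions, not the repository's settings.
    """
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(deflections):
        # pick a random pixel to deflect
        x = int(rng.integers(0, h))
        y = int(rng.integers(0, w))
        # pick a random neighbor inside the window, clipped to the image
        dx, dy = rng.integers(-window, window + 1, size=2)
        nx = int(np.clip(x + dx, 0, h - 1))
        ny = int(np.clip(y + dy, 0, w - 1))
        # copy the neighbor's value over the chosen pixel
        out[x, y] = out[nx, ny]
    return out
```

In the paper, the deflected image is then passed through a wavelet-based denoiser (e.g. soft-thresholded wavelet shrinkage) before classification; the notebook in this repository demonstrates the full pipeline.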
Stars
72
Forks
21
Language
Jupyter Notebook
License
—
Category
Last pushed
Jun 21, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iamaaditya/pixel-deflection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research