microsoft/denoised-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
This project helps machine learning engineers and researchers make their image classification models provably robust against adversarial perturbations: subtle, malicious alterations to images that are hard for humans to detect. It takes an existing image classifier, including black-box APIs from cloud providers such as Azure, Google, AWS, or Clarifai, prepends a custom-trained denoiser, and applies randomized smoothing to the combined pipeline ('denoised smoothing'). The output is a smoothed classifier whose predictions come with certified robustness guarantees: they provably cannot be flipped by perturbations below a certain size.
102 stars. No commits in the last 6 months.
Use this if you need to ensure the trustworthiness and integrity of your image classification systems, especially in security-sensitive applications where adversaries might try to trick your models.
Not ideal if your primary concern is raw classification accuracy on clean data, or if you are working with data types other than images.
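The prediction rule behind denoised smoothing can be sketched in a few lines: sample Gaussian noise around the input, run each noisy copy through the denoiser and then the base classifier, and return the majority-vote label. The sketch below is a minimal, hypothetical illustration (not the repo's actual API); `denoise` and `classify` are placeholder callables standing in for the trained denoiser and the pretrained classifier.

```python
import numpy as np

def smoothed_predict(denoise, classify, x, sigma=0.25, n=100, rng=None):
    """Hypothetical sketch of the denoised-smoothing prediction rule:
    add Gaussian noise, denoise, classify, and take a majority vote."""
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for reproducibility
    votes = {}
    for _ in range(n):
        noisy = x + sigma * rng.standard_normal(x.shape)  # Gaussian corruption
        label = classify(denoise(noisy))                  # denoise, then classify
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)                      # majority-vote label

# Toy usage: identity denoiser and a threshold "classifier" on a dummy image.
x = np.ones((3, 3))
pred = smoothed_predict(lambda z: z, lambda z: int(z.mean() > 0), x)
```

In the actual method, the denoiser is trained specifically so that the pretrained classifier performs well on denoised noisy images, which is what makes the smoothing guarantee usable without retraining the classifier itself.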
Stars
102
Forks
19
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Apr 02, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/microsoft/denoised-smoothing"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research