um-dsp/Morphence
Morphence: An implementation of a moving target defense against adversarial example attacks demonstrated for image classification models trained on MNIST and CIFAR-10.
This project helps machine learning researchers and security professionals harden image classification models against adversarial attacks. Given a dataset such as MNIST or CIFAR-10, it generates a protected model that is harder to fool with adversarial examples, and it evaluates the resulting model so you get insight into its defensive capabilities.
No commits in the last 6 months.
Use this if you are developing or deploying image classification models and need to assess or improve their resilience against sophisticated adversarial attacks.
Not ideal if you are looking for a plug-and-play solution for general machine learning tasks or do not have a strong understanding of adversarial machine learning concepts.
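The defense described above is a moving target defense: instead of serving one fixed model that an attacker can probe and craft adversarial examples against, the system keeps a pool of perturbed model variants and answers each query from a randomly chosen one. The sketch below illustrates that general scheme with toy stand-in "models" (plain functions); the names `make_student` and `MovingTargetPool` are illustrative assumptions, not Morphence's actual API.

```python
import random

def make_student(bias):
    # Hypothetical perturbed copy of a base classifier; `bias` stands in
    # for the per-student weight perturbation / retraining step.
    def classify(x):
        return (x + bias) % 10  # toy 10-class decision
    return classify

class MovingTargetPool:
    """Serve each query from a randomly selected pool member, so no single
    fixed model is exposed for an attacker to optimize against."""

    def __init__(self, n_students=5, seed=0):
        self.rng = random.Random(seed)
        self.students = [make_student(self.rng.randint(0, 9))
                         for _ in range(n_students)]

    def predict(self, x):
        # A fresh random choice per query is what "moves" the target.
        model = self.rng.choice(self.students)
        return model(x)

pool = MovingTargetPool()
label = pool.predict(3)  # a class index in 0..9; varies with the chosen student
```

This only conveys the serving-time randomization; in the real defense each pool member is a genuinely retrained or perturbed network, and the pool itself is periodically renewed.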
Stars: 23
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Aug 09, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/um-dsp/Morphence"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
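For scripted access, the same endpoint can be called from Python. The URL pattern below is taken verbatim from the curl example above (`/{category}/{owner}/{repo}`); the helper names are illustrative, and only the standard library is used.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category, owner, repo):
    # Mirrors the path shown in the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    # Anonymous access is rate-limited to 100 requests/day per the note above.
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)

print(build_url("ml-frameworks", "um-dsp", "Morphence"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/um-dsp/Morphence
```

`fetch_quality` performs a live request, so it needs network access; `build_url` can be used offline to construct the endpoint.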
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research