um-dsp/Morphence

Morphence: An implementation of a moving target defense against adversarial example attacks demonstrated for image classification models trained on MNIST and CIFAR-10.

Quality score: 37 / 100 (Emerging)

This project helps machine learning researchers and security professionals harden image classification models against adversarial example attacks. You feed it an image dataset such as MNIST or CIFAR-10, and it generates a protected model that is harder to fool with adversarial examples. The output is a more robust, evaluated classification model, along with insight into its defense capabilities.
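The moving target defense idea described above can be sketched roughly as follows: serve each prediction from a pool of model variants chosen at random, so an attacker cannot reliably craft adversarial examples against one fixed model. The class and method names below are illustrative assumptions, not Morphence's actual API:

```python
import random

class MovingTargetEnsemble:
    """Hypothetical sketch of a moving target defense: a pool of
    differently-trained model variants answers queries at random."""

    def __init__(self, models):
        self.models = models  # pool of model variants

    def predict(self, x):
        # Each query is served by a randomly selected pool member,
        # so the attack surface shifts from call to call.
        model = random.choice(self.models)
        return model(x)

# Toy "models": constant classifiers standing in for trained variants.
pool = [lambda x, label=i: label for i in range(5)]
mtd = MovingTargetEnsemble(pool)
prediction = mtd.predict([0.1, 0.2])  # one of 0..4, varying per call
```

In the actual system, the pool members would be retrained or perturbed copies of a base classifier, and the pool itself is periodically renewed.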

No commits in the last 6 months.

Use this if you are developing or deploying image classification models and need to assess or improve their resilience against sophisticated adversarial attacks.

Not ideal if you are looking for a plug-and-play solution for general machine learning tasks or do not have a strong understanding of adversarial machine learning concepts.

machine-learning-security image-classification adversarial-robustness computer-vision AI-safety
Status: Stale (6 months) · No package published · No known dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 23
Forks: 5
Language: Python
License: MIT
Last pushed: Aug 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/um-dsp/Morphence"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
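The same endpoint can be called from Python with the standard library. The URL path is taken from the curl example above; the response schema is not documented here, so treat the fetch helper as an assumption:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # No API key is needed for up to 100 requests/day.
    # The JSON shape of the response is an assumption.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "um-dsp", "Morphence")
print(url)
```

With a free API key, the documented limit rises to 1,000 requests/day; how the key is passed (header or query parameter) is not specified on this page.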