AI-secure/VeriGauge

A united toolbox for running major robustness verification approaches for DNNs. [S&P 2023]

Score: 27 / 100 · Experimental

VeriGauge helps security researchers and machine learning engineers evaluate how robust their Deep Neural Networks (DNNs) are against adversarial attacks. It takes a trained image classification model and an input image, then outputs a certified radius around that image within which no adversarial perturbation can change the model's prediction. This is especially useful for critical AI applications where model reliability is paramount.
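The certified radius described above can be stated formally. A minimal sketch, assuming a classifier f and a perturbation norm ‖·‖ (commonly ℓ2 or ℓ∞ in this literature):

```latex
% Certified radius r(x) at input x for classifier f:
% the largest r such that every perturbation of norm at most r
% leaves the predicted label unchanged.
r(x) = \max \left\{ r \ge 0 \;:\; f(x + \delta) = f(x)
        \quad \forall\, \delta \text{ with } \lVert \delta \rVert \le r \right\}
```

A verifier such as the approaches bundled in VeriGauge reports a certified lower bound on r(x); the true radius may be larger, but no adversarial example exists within the certified bound.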

No commits in the last 6 months.

Use this if you need to formally certify the robustness of your feed-forward neural networks with ReLU activations against small, imperceptible changes in input data for image classification tasks.

Not ideal if your neural network architectures are not primarily feed-forward with ReLU activations or if you are not working with image classification on datasets like MNIST, CIFAR-10, or ImageNet.

AI-security model-robustness adversarial-AI deep-learning-verification image-classification
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 90
Forks: 7
Language: C
License: none
Last pushed: Mar 24, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-secure/VeriGauge"

Open to everyone — 100 requests/day with no key needed; a free key raises the limit to 1,000/day.