AI-secure/VeriGauge
A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023]
VeriGauge helps security researchers and machine learning engineers evaluate how robust their Deep Neural Networks (DNNs) are against adversarial attacks. It takes a trained image classification model and an input image, then outputs a certified radius around that image within which no adversarial perturbation can change the model's prediction. This is especially useful for critical AI applications where model reliability is paramount.
No commits in the last 6 months.
Use this if you need to formally certify the robustness of your feed-forward neural networks with ReLU activations against small, imperceptible changes in input data for image classification tasks.
Not ideal if your architecture is not primarily feed-forward with ReLU activations, or if you are not doing image classification on datasets such as MNIST, CIFAR-10, or ImageNet.
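For intuition, the kind of certification described above can be sketched with interval bound propagation (IBP), one of the simpler verification techniques for feed-forward ReLU networks. The weights below are a toy illustration, not VeriGauge's actual API or models:

```python
import numpy as np

# Toy 2-layer ReLU "classifier" with hand-picked weights (illustrative only)
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.2])

def affine_bounds(lo, hi, W, b):
    # Propagate an axis-aligned box [lo, hi] through an affine layer:
    # positive weights take the matching bound, negative weights the opposite one.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify(x, eps):
    # Certify that the logit stays positive for every input in the L-inf
    # ball of radius eps around x (a sound but loose sufficient condition).
    lo, hi = x - eps, x + eps
    lo, hi = affine_bounds(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = affine_bounds(lo, hi, W2, b2)
    return bool(lo[0] > 0)

x = np.array([2.0, -1.0])
print(certify(x, 0.1))  # small radius: certified robust
print(certify(x, 2.0))  # large radius: certification fails
```

A toolbox like VeriGauge bundles many such methods (complete and incomplete) behind one interface; the certified radius is then the largest eps for which a check like this succeeds, typically found by binary search.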
Stars
90
Forks
7
Language
C
License
—
Category
Last pushed
Mar 24, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-secure/VeriGauge"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
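The same endpoint can be queried from Python with only the standard library. The URL pattern below simply mirrors the curl command above; the response is assumed to be JSON, and the field names are not guaranteed here:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    # Mirrors the curl example: /quality/<category>/<owner>/<repo>
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_repo_stats(category: str, owner: str, repo: str) -> dict:
    # Works without an API key inside the free 100 requests/day tier.
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(fetch_repo_stats("ml-frameworks", "AI-secure", "VeriGauge"))
```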
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...