val-iisc/GD-UAP
Generalized Data-free Universal Adversarial Perturbations
This tool helps data scientists and machine learning engineers evaluate the robustness of their computer vision models. Given an existing image classification, segmentation, or depth estimation model, it generates a single subtle perturbation that fools the model across most inputs (a universal adversarial perturbation). The output is a set of adversarial images or perturbation patterns that can be used to test how vulnerable a model is to small, image-agnostic changes.
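The core idea can be illustrated with a minimal sketch. This is not the repository's own code (GD-UAP ships its own training and evaluation scripts); the `toy_predict` model and all names below are hypothetical stand-ins, showing only what "universal" means: one fixed noise pattern, clipped to a small budget, added to every input, with vulnerability measured as the fraction of predictions that flip.

```python
import numpy as np

def apply_universal_perturbation(images, delta, epsilon=10.0):
    """Add one shared perturbation to a batch of images in [0, 255]."""
    delta = np.clip(delta, -epsilon, epsilon)   # enforce the L-infinity budget
    return np.clip(images + delta, 0.0, 255.0)  # stay in the valid pixel range

def fooling_rate(predict, images, delta, epsilon=10.0):
    """Fraction of inputs whose predicted label changes under the perturbation."""
    clean = predict(images)
    adv = predict(apply_universal_perturbation(images, delta, epsilon))
    return float(np.mean(clean != adv))

# Toy stand-in for a classifier: label = brightest channel of the mean pixel.
def toy_predict(images):
    return images.mean(axis=(1, 2)).argmax(axis=1)

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(8, 32, 32, 3))
delta = rng.uniform(-10, 10, size=(32, 32, 3))  # one pattern for all 8 images
rate = fooling_rate(toy_predict, images, delta)
print(f"fooling rate: {rate:.2f}")
```

A real evaluation would substitute a trained network for `toy_predict` and the perturbation produced by GD-UAP for the random `delta`.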
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher focused on computer vision and need to stress-test the security and reliability of your classification, segmentation, or depth estimation models against adversarial attacks.
Not ideal if you are looking to improve model accuracy or performance for standard, non-adversarial tasks, or if you are not working with computer vision models.
Stars
73
Forks
14
Language
Python
License
—
Category
ml-frameworks
Last pushed
Oct 05, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/val-iisc/GD-UAP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
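For scripted access, the same endpoint can be built programmatically. A minimal Python sketch, assuming only the URL shown in the curl example above; the response schema is not documented here, so this stops at constructing the URL, which any HTTP client can then fetch.

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint, URL-escaping each path segment."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "val-iisc", "GD-UAP")
print(url)
# Matches the curl example above.
```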
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...