val-iisc/GD-UAP

Generalized Data-free Universal Adversarial Perturbations

Quality score: 35 / 100 (Emerging)

This helps data scientists and machine learning engineers evaluate the robustness of their computer vision models. It takes an existing image classification, segmentation, or depth estimation model and, without needing the original training data, crafts a subtle universal perturbation that can fool the model. The output is a perturbation pattern that can be added to arbitrary inputs to test how vulnerable the model is to small, image-agnostic changes.
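Below is a minimal sketch of how such a universal perturbation might be evaluated once you have one. It assumes a PyTorch classifier, an image batch scaled to [0, 1], and a saved perturbation file named uap.pt; the function name, file name, epsilon bound, and the use of PyTorch are all illustrative assumptions, not part of this repository's actual interface.

import torch

def fooling_rate(model, images, uap, eps=10/255):
    # images: float tensor (N, C, H, W) in [0, 1]; uap: (C, H, W).
    # Returns the fraction of inputs whose prediction flips when the
    # same (universal) perturbation is added to every image.
    model.eval()
    delta = torch.clamp(uap, -eps, eps)              # keep the perturbation small
    with torch.no_grad():
        clean_pred = model(images).argmax(dim=1)
        adv = torch.clamp(images + delta, 0.0, 1.0)  # one perturbation for all inputs
        adv_pred = model(adv).argmax(dim=1)
    return (clean_pred != adv_pred).float().mean().item()

# Hypothetical usage (model, images, and uap.pt are placeholders you supply):
# uap = torch.load("uap.pt")
# print(f"Fooling rate: {fooling_rate(model, images, uap):.2%}")

A higher fooling rate means the model is more easily disrupted by a single image-agnostic pattern, which is the kind of vulnerability this project is designed to expose.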

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher focused on computer vision and need to stress-test the security and reliability of your classification, segmentation, or depth estimation models against adversarial attacks.

Not ideal if you are looking to improve model accuracy or performance for standard, non-adversarial tasks, or if you are not working with computer vision models.

computer-vision model-robustness adversarial-testing image-analysis machine-learning-security
Badges: No License, Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 18 / 25


Stars: 73
Forks: 14
Language: Python
License: None
Last pushed: Oct 05, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/val-iisc/GD-UAP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
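For scripted access, the same endpoint can be queried from Python. The sketch below assumes only the third-party requests library and prints the raw JSON, since the response schema is not documented on this page.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/val-iisc/GD-UAP"
resp = requests.get(url, timeout=10)   # no API key needed on the free tier (100 requests/day)
resp.raise_for_status()                # fail loudly on HTTP errors
print(resp.json())                     # inspect the returned quality report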