cuge1995/ICCV-2021-adversarial-attacks-and-defense

ICCV 2021 papers and code focused on adversarial attacks and defenses

Score: 13 / 100 (Experimental)

This resource curates research papers and associated code from ICCV 2021 focusing on adversarial attacks and defenses in computer vision. It provides insights into how machine learning models can be tricked and how to make them more robust. Researchers and security professionals working with image recognition, object detection, or other vision-based AI systems would use this to understand vulnerabilities and develop countermeasures.
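To make "tricking" a model concrete: the canonical attack in this literature is the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model's loss. Below is a minimal NumPy sketch on a hand-rolled logistic classifier; the weights, input, and epsilon are made up for illustration and are not from any paper in this collection.

```python
import numpy as np

# FGSM sketch on a tiny logistic "classifier".
# All weights and inputs here are illustrative, not from the repo.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights (hypothetical)
x = rng.normal(size=16)   # a clean input
y = 1.0                   # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(inp):
    # Binary cross-entropy for label y on the logit w @ inp.
    p = sigmoid(w @ inp)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Gradient of the loss w.r.t. the *input* (analytic for this linear model):
# dL/dx = (p - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM step: perturb each coordinate by eps in the sign of the gradient.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

# The adversarial input has strictly higher loss than the clean one.
assert loss(x_adv) > loss(x)
```

Defenses surveyed at ICCV 2021 (e.g. adversarial training) typically work by folding such perturbed inputs back into the training loop so the model learns to resist them.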

No commits in the last 6 months.

Use this if you are a researcher or security professional investigating the resilience of AI models against malicious inputs or developing strategies to protect vision systems from adversarial manipulation.

Not ideal if you are looking for a plug-and-play tool for general computer vision tasks or a high-level overview of AI without technical depth.

Tags: AI-security, computer-vision, machine-learning-robustness, adversarial-machine-learning, image-recognition-defense
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 11
License: none
Last pushed: Nov 05, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cuge1995/ICCV-2021-adversarial-attacks-and-defense"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.