DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
This is a library for researchers and developers who need machine learning models that are resilient to adversarial attacks. It applies a range of attack methods to image and graph data to probe model vulnerabilities, and provides defense strategies to harden models against them. The results help machine learning engineers and AI security researchers build and evaluate more robust AI systems.
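To illustrate the kind of attack the library implements, here is a minimal sketch of projected gradient descent (PGD) against a toy logistic-regression classifier in plain NumPy. This is a conceptual example only, not DeepRobust's API; the model, loss, and all parameter values are chosen for illustration.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """PGD on a logistic-regression cross-entropy loss.

    Repeatedly takes a signed gradient-ascent step on the loss with
    respect to the input, then projects the perturbed input back into
    an L-infinity ball of radius eps around the original x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))           # sigmoid prediction
        grad = (p - y) * w                      # d(cross-entropy)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)   # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# A correctly classified point (class 1) is nudged toward the decision
# boundary while staying within eps of the original input.
x = np.array([1.0, 1.0])
w = np.array([2.0, -1.0])
x_adv = pgd_attack(x, y=1, w=w, b=0.0)
```

Defense methods in this space typically invert the idea: adversarial training generates perturbed examples like these during training and fits the model on them.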
1,080 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a machine learning engineer or AI security researcher who needs to evaluate the robustness of image- or graph-based models against adversarial attacks, or to implement defense mechanisms.
Not ideal if you are looking for a general machine learning library for model development rather than specific adversarial robustness testing.
Stars: 1,080
Forks: 191
Language: Python
License: MIT
Category: ML Frameworks
Last pushed: Jun 26, 2025
Commits (30d): 0
Dependencies: 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DSE-MSU/DeepRobust"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
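The same endpoint can be queried from Python with the standard library. A minimal sketch, assuming the endpoint returns JSON (the path components are taken from the curl example above; the response schema is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the endpoint URL for a repo, e.g. quality_url('ml-frameworks', 'DSE-MSU/DeepRobust')."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the quality record; anonymous access is limited to 100 requests/day."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)  # assumes a JSON response body

# Performs a live network request:
# data = fetch_quality("ml-frameworks", "DSE-MSU/DeepRobust")
```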
Related frameworks
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
Hyperparticle/one-pixel-attack-keras
Keras implementation of "One pixel attack for fooling deep neural networks" using differential...