TaoShuchang/G-NIA
G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021)
This project helps researchers and security analysts understand and test vulnerabilities in graph-based machine learning models. It takes a trained Graph Neural Network (GNN) and simulates a highly constrained attack scenario where only a single malicious node is added to the network. The output demonstrates how this single addition can significantly degrade the GNN's performance. It is designed for those studying adversarial attacks on machine learning systems.
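The threat model is easiest to see on a toy example. The sketch below is not the G-NIA method itself (G-NIA learns the injected node's features and edges with a parameterized model); it only illustrates the scenario the project simulates: appending one hand-crafted node to a graph and watching a fixed one-layer GCN flip its prediction for a target node. The graph, features, and weights are all made up for illustration.

```python
import numpy as np

def gcn_logits(A, X, W):
    """One GCN layer: D^-1/2 (A + I) D^-1/2 X W (no nonlinearity, for brevity)."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_norm @ X @ W

def inject_node(A, X, neighbors, feat):
    """Append a single malicious node wired to `neighbors` with features `feat`."""
    n = A.shape[0]
    A2 = np.zeros((n + 1, n + 1))
    A2[:n, :n] = A
    for j in neighbors:
        A2[n, j] = A2[j, n] = 1.0
    return A2, np.vstack([X, feat])

# Toy path graph 0-1-2; every node's features point at class 0; the identity
# weight matrix stands in for a pre-trained classifier.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.]] * 3)
W = np.eye(2)
target = 2

clean_pred = gcn_logits(A, X, W)[target].argmax()        # class 0

# Single-node injection: one new node, one edge to the target, class-1 features.
A_atk, X_atk = inject_node(A, X, neighbors=[target], feat=np.array([0., 10.]))
atk_pred = gcn_logits(A_atk, X_atk, W)[target].argmax()  # class 1: flipped
```

Because message passing averages over neighbors, a single well-placed node is enough to dominate the target's representation, which is why this constrained attack is still effective.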
No commits in the last 6 months.
Use this if you need to evaluate the robustness of Graph Neural Networks against stealthy, low-cost single-node injection attacks.
Not ideal if you are looking for a general-purpose adversarial attack toolkit or methods involving multiple node injections.
Stars: 29
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Jan 11, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TaoShuchang/G-NIA"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research