TaoShuchang/G-NIA

G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021)

Quality score: 19 / 100 (Experimental)

This project helps researchers and security analysts understand and test vulnerabilities in graph-based machine learning models. It takes a trained Graph Neural Network (GNN) and simulates a highly constrained attack scenario where only a single malicious node is added to the network. The output demonstrates how this single addition can significantly degrade the GNN's performance. It is designed for those studying adversarial attacks on machine learning systems.
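To make the attack scenario concrete, here is a minimal toy sketch (not the paper's G-NIA method, which learns the injected node via a generative model): one adversarial node with crafted features is attached to a target node in a small graph, and a frozen one-layer GCN-style model is re-run to show the target's prediction flipping. All graphs, features, and weights below are illustrative assumptions.

```python
import numpy as np

def gcn_logits(A, X, W):
    """One GCN-style propagation step: row-normalized (A + I) X W."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return D_inv * (A_hat @ X) @ W

# Toy "trained" model: two connected nodes, identity weights.
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # adjacency matrix
X = np.array([[2.0, 0.0], [0.0, 1.0]])   # node features
W = np.eye(2)                             # frozen classifier weights

clean_pred = gcn_logits(A, X, W).argmax(axis=1)   # node 0 -> class 0

# Single node injection: add one adversarial node, linked to node 0 only.
A_atk = np.zeros((3, 3))
A_atk[:2, :2] = A
A_atk[0, 2] = A_atk[2, 0] = 1.0
X_atk = np.vstack([X, [[0.0, 10.0]]])    # crafted feature vector

atk_pred = gcn_logits(A_atk, X_atk, W).argmax(axis=1)
# Node 0's prediction flips from class 0 to class 1 after injection.
```

The point of the sketch is the constraint the paper studies: the attacker never edits existing nodes or edges, yet a single well-placed neighbor is enough to change a prediction through message passing.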

No commits in the last 6 months.

Use this if you need to evaluate the robustness of Graph Neural Networks against stealthy, low-cost single-node injection attacks.

Not ideal if you are looking for a general-purpose adversarial attack toolkit or methods involving multiple node injections.

cybersecurity machine-learning-security adversarial-ai graph-analytics network-security
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 29
Forks: 1
Language: Python
License: None
Last pushed: Jan 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TaoShuchang/G-NIA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
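The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the URL shown above; the response schema and the API-key header name are assumptions, not documented here:

```python
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch the quality report as a dict; a key raises the rate limit.

    The Authorization header name is a guess -- check the API docs.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Anonymous tier (100 requests/day), no key needed:
url = quality_url("ml-frameworks", "TaoShuchang", "G-NIA")
```

`fetch_quality(...)` would then return the parsed JSON report for the repository, subject to the rate limits above.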