EdisonLeeeee/Graph-Adversarial-Learning

A curated collection of papers on adversarial attacks and defenses on graph data.

Quality score: 46 / 100 (Emerging)

This collection helps machine learning practitioners secure their Graph Neural Networks (GNNs) and evaluate their robustness. It gathers research papers on methods to intentionally mislead (attack) or fortify (defend) GNNs, organized by year and type. If you are developing or deploying GNNs for tasks like fraud detection, recommendation systems, or social network analysis, this resource helps you identify potential vulnerabilities and countermeasures.

580 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or data scientist working with Graph Neural Networks and need to research methods for adversarial attacks or defenses to ensure the reliability and security of your models.

Not ideal if you are looking for an out-of-the-box tool or library to directly implement graph adversarial attacks or defenses without diving into academic research.

graph-machine-learning model-security adversarial-robustness fraud-detection-ML recommender-systems-ML
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 580
Forks: 77
Language: Python
License: GPL-3.0
Last pushed: Nov 07, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/EdisonLeeeee/Graph-Adversarial-Learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
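The curl command above can also be reproduced in Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON; the response schema and the `quality_url` helper are illustrative, not part of the documented API:

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repository (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "EdisonLeeeee", "Graph-Adversarial-Learning")

# Uncomment to fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)  # assumed to be a JSON object
#     print(data)
```

With a free API key, the same request can presumably be authenticated per the service's documentation to raise the limit to 1,000 requests/day.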