THUDM/grb

Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.

52 / 100 (Established)

This project helps machine learning researchers evaluate how well their graph neural network models can withstand adversarial attacks. It takes various graph datasets and different attack/defense methods as input, then provides standardized robustness scores and reproducible leaderboards. Researchers who develop or use graph machine learning models for tasks like node classification or graph classification would find this useful for comparing model resilience.

No commits in the last 6 months. Available on PyPI.

Use this if you are a researcher focused on developing or evaluating the security and robustness of graph machine learning models against malicious data alterations.

Not ideal if you are a practitioner looking for a pre-built, robust graph model for a specific application without needing to benchmark its adversarial resilience.

graph-machine-learning adversarial-robustness model-evaluation graph-analytics security-auditing
Stale (6m) · No dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 25 / 25
Community 18 / 25


Stars: 99
Forks: 18
Language: Python
License: MIT
Last pushed: Nov 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/THUDM/grb"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
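The curl command above can also be scripted. Below is a minimal Python sketch that builds the endpoint URL and fetches the payload; only the THUDM/grb URL is confirmed by this page, so the general `category/owner/repo` path pattern and the JSON response format are assumptions inferred from that single example.

```python
# Hedged sketch: query the quality API shown above from Python.
# Assumption: the endpoint follows /api/v1/quality/<category>/<owner>/<repo>
# and returns JSON; only the THUDM/grb URL is documented on this page.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a project (path pattern assumed from the example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access;
    anonymous tier is rate-limited to 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("ml-frameworks", "THUDM", "grb"))
```

Running the last line prints the same URL used in the curl example; call `fetch_quality` only when network access is available.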