THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
This project helps machine learning researchers evaluate how well graph neural network models withstand adversarial attacks. It takes graph datasets and attack/defense methods as input and produces standardized robustness scores and reproducible leaderboards. Researchers who develop or use graph machine learning models for tasks such as node classification or graph classification will find it useful for comparing model resilience.
No commits in the last 6 months. Available on PyPI.
Use this if you are a researcher focused on developing or evaluating the security and robustness of graph machine learning models against malicious data alterations.
Not ideal if you are a practitioner looking for a pre-built, robust graph model for a specific application without needing to benchmark its adversarial resilience.
Stars: 99
Forks: 18
Language: Python
License: MIT
Category:
Last pushed: Nov 06, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/THUDM/grb"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
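The same endpoint can also be called from Python. A minimal sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the response's JSON fields are not documented here, so the payload is returned as a plain dict:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repo's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    No API key needed for up to 100 requests/day; a free key
    raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: the THUDM/grb entry shown above
# data = fetch_quality("ml-frameworks", "THUDM", "grb")
```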
Related frameworks
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and computing robustness...
hendrycks/robustness
Corruption and Perturbation Robustness (ICLR 2019)