QwQ2000/WSDM26-Graph-Unlearning-Inversion

WSDM'26 full paper -- "Unlearning Inversion Attack for Graph Neural Networks"

Quality score: 35 / 100 (Emerging)

This project helps machine learning researchers and data scientists evaluate the security and privacy risks of Graph Neural Networks (GNNs). It takes a GNN model and a dataset, then demonstrates how an 'inversion attack' could reconstruct sensitive input data. It also shows how 'unlearning' mechanisms can mitigate these privacy breaches.

Use this if you are a researcher or practitioner working with Graph Neural Networks and need to understand or demonstrate their vulnerability to data reconstruction attacks and the effectiveness of unlearning methods.

Not ideal if you are looking for a general-purpose graph analysis tool or a library for building GNNs, as this focuses specifically on privacy and unlearning research.

graph-neural-networks data-privacy machine-unlearning cybersecurity-research model-auditing
No package · No dependents
Maintenance 10 / 25
Adoption 4 / 25
Maturity 13 / 25
Community 8 / 25
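The four subscores above appear to sum directly to the overall score (10 + 4 + 13 + 8 = 35). A minimal sketch of that computation, assuming the overall score is the plain sum of the subscores:

```python
# Subscores from the scorecard, each out of 25.
# Assumption: the overall score is their simple sum (10 + 4 + 13 + 8 = 35).
subscores = {
    "Maintenance": 10,
    "Adoption": 4,
    "Maturity": 13,
    "Community": 8,
}

overall = sum(subscores.values())
print(f"{overall} / 100")  # 35 / 100
```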


Stars: 8
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/QwQ2000/WSDM26-Graph-Unlearning-Inversion"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
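The same request can be built programmatically. A hedged sketch, assuming only the endpoint path shown in the curl example above (the response schema is not documented here, so no field names are assumed):

```python
# Sketch: build the quality-API URL for an arbitrary owner/repo pair.
# Endpoint path taken from the curl example; quote() guards against
# characters that need URL escaping in repo names.
from urllib.parse import quote
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("QwQ2000", "WSDM26-Graph-Unlearning-Inversion")
req = urllib.request.Request(url)  # send with urllib.request.urlopen(req)
print(url)
```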