mims-harvard/GraphXAI
GraphXAI: Resource to support the development and evaluation of GNN explainers
When working with Graph Neural Networks (GNNs), it is critical to understand why a GNN makes a particular prediction. This project helps researchers and developers who build or use GNNs rigorously test and compare methods for explaining GNN decisions. It takes GNN explanation methods and purpose-built benchmark graph datasets as input, and outputs assessments of how well those explanation methods actually work.
206 stars. No commits in the last 6 months.
Use this if you are a researcher or machine learning engineer developing or evaluating explainability techniques for Graph Neural Networks and need reliable benchmarks.
Not ideal if you are a domain expert simply looking for an explanation of a GNN's output without wanting to build or evaluate new explanation methods.
Stars: 206
Forks: 36
Language: Python
License: MIT
Category:
Last pushed: May 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/mims-harvard/GraphXAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
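The curl call above can also be made from Python's standard library. This is a minimal sketch: the endpoint path comes from the curl example, but the response format and how an API key would be supplied (here, a Bearer `Authorization` header) are assumptions, not documented behavior.

```python
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def build_request(owner, repo, api_key=None):
    """Build a GET request for a repository's quality data.

    Passing the key as a Bearer Authorization header is an
    assumption; check the API's docs for the real mechanism.
    """
    url = f"{API_BASE}/{owner}/{repo}"
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    return urllib.request.Request(url, headers=headers)

req = build_request("mims-harvard", "GraphXAI")
# import json
# data = json.load(urllib.request.urlopen(req))  # uncomment to fetch live data
print(req.full_url)
```

Without a key this stays within the 100 requests/day anonymous limit; supplying `api_key` is only useful once you have registered for the 1,000/day tier.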
Higher-rated alternatives
eliorc/node2vec
Implementation of the node2vec algorithm.
mims-harvard/decagon
Graph convolutional neural network for multirelational link prediction
mims-harvard/nimfa
Nimfa: Nonnegative matrix factorization in Python
ferencberes/online-node2vec
Node Embeddings in Dynamic Graphs
claws-lab/jodie
A PyTorch implementation of ACM SIGKDD 2019 paper "Predicting Dynamic Embedding Trajectory in...