flyingdoog/PGExplainer
Parameterized Explainer for Graph Neural Network
This is a tool for machine learning engineers and data scientists working with Graph Neural Networks (GNNs). It helps you understand why your GNN made a specific prediction on a graph by identifying the most influential parts of that graph. You input your GNN model and graph data, and it outputs an explanation highlighting the key nodes and edges driving the prediction.
144 stars. No commits in the last 6 months.
Use this if you need to interpret the decisions of your Graph Neural Network, especially for tasks involving complex graph structures like social networks, molecular structures, or knowledge graphs.
Not ideal if you are working with non-graph-structured data or if you need explanations for traditional machine learning models.
Stars
144
Forks
17
Language
Jupyter Notebook
License
—
Category
ml-frameworks
Last pushed
Feb 23, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/flyingdoog/PGExplainer"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
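The endpoint above follows a predictable path pattern (category, owner, repo). A minimal Python sketch for building such URLs, assuming the path layout shown in the example request; the `quality_endpoint` helper name and any categories other than `ml-frameworks` are illustrative assumptions, not part of the documented API:

```python
# Hypothetical helper that reconstructs the endpoint pattern shown above.
# Only the "ml-frameworks" category and base URL are taken from the example
# request; everything else here is an assumption for illustration.

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_endpoint(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a given repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

url = quality_endpoint("ml-frameworks", "flyingdoog", "PGExplainer")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/flyingdoog/PGExplainer
```

The resulting URL can be passed to `curl` as in the example above, or fetched from Python with `urllib.request.urlopen`.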
Higher-rated alternatives
pyg-team/pytorch_geometric
Graph Neural Network Library for PyTorch
a-r-j/graphein
Protein Graph Library
raamana/graynet
Subject-wise networks from structural MRI, both vertex- and voxel-wise features (thickness, GM...
pykale/pykale
Knowledge-Aware machine LEarning (KALE): accessible machine learning from multiple sources for...
dmlc/dgl
Python package built to ease deep learning on graphs, on top of existing DL frameworks.