flyingdoog/PGExplainer

Parameterized Explainer for Graph Neural Network

Score: 33 / 100 (Emerging)

This is a tool for machine learning engineers and data scientists working with Graph Neural Networks (GNNs). It helps you understand why your GNN made a specific prediction on a graph by identifying the most influential parts of that graph. You input your GNN model and graph data, and it outputs an explanation highlighting the key nodes and edges driving the prediction.
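Concretely, PGExplainer-style methods assign an importance score to each edge and keep the top-scoring edges as the explanation subgraph. A minimal, model-free sketch of that final selection step (the edge scores below are illustrative placeholders; in the real tool they come from the trained explainer network):

```python
def top_k_edges(edge_scores, k):
    """Return the k highest-scoring edges (the explanation subgraph).

    edge_scores: dict mapping (src, dst) edge tuples to importance
    scores produced by an explainer; here they are made-up values.
    """
    ranked = sorted(edge_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [edge for edge, _ in ranked[:k]]

# Toy scores for a 4-node graph; a real explainer learns these per edge.
scores = {(0, 1): 0.92, (1, 2): 0.15, (2, 3): 0.78, (0, 3): 0.05}
print(top_k_edges(scores, 2))  # -> [(0, 1), (2, 3)]
```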

144 stars. No commits in the last 6 months.

Use this if you need to interpret the decisions of your Graph Neural Network, especially for tasks involving complex graph structures like social networks, molecular structures, or knowledge graphs.

Not ideal if you are working with non-graph-structured data or if you need explanations for traditional machine learning models.

graph-machine-learning model-interpretability explainable-AI graph-data-science
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 144
Forks: 17
Language: Jupyter Notebook
License: None
Last pushed: Feb 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/flyingdoog/PGExplainer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
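The same endpoint can be queried from Python with the standard library. A sketch, assuming only the URL pattern shown in the curl command above; the JSON field names in the response are not documented here, so inspect the actual output before relying on them:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as a dict.

    The response schema is an assumption; check the real API output.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Builds the same URL as the curl example above.
print(quality_url("ml-frameworks", "flyingdoog", "PGExplainer"))
```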