mims-harvard/GraphXAI

GraphXAI: Resource to support the development and evaluation of GNN explainers

Score: 46 / 100 (Emerging)

When working with Graph Neural Networks (GNNs), it's critical to understand why a GNN makes a particular prediction. This project helps researchers and developers who build or use GNNs rigorously test and compare methods for explaining GNN decisions. It takes in GNN explanation methods together with novel, ready-made graph datasets, and outputs assessments of how well those explanation methods actually work.

206 stars. No commits in the last 6 months.

Use this if you are a researcher or machine learning engineer developing or evaluating explainability techniques for Graph Neural Networks and need reliable benchmarks.

Not ideal if you are a domain expert simply looking for an explanation of a GNN's output without wanting to build or evaluate new explanation methods.

graph-neural-networks explainable-ai model-evaluation machine-learning-research ai-auditing
Flags: Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 206
Forks: 36
Language: Python
License: MIT
Last pushed: May 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/mims-harvard/GraphXAI"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.