aiim-research/GRETEL

GRETEL is a framework for the development and evaluation of counterfactual explanation methods for graph classifiers.

Score: 47 / 100 (Emerging)

This framework helps researchers quickly develop and test new methods for explaining the decisions of graph-based machine learning models. It accepts a range of datasets and explanation techniques as input and provides a standardized way to evaluate how well those explanations work. Its audience is machine-learning researchers working on making complex graph models more understandable, especially in fields like health and finance.

Use this if you are a researcher designing and evaluating techniques to explain why a graph-based AI made a particular decision.

Not ideal if you are an end-user simply looking to understand a specific model's decision without developing new explanation methods.

Machine-Learning-Explainability Graph-Neural-Networks AI-Trustworthiness Model-Interpretation Algorithmic-Transparency
No Package · No Dependents
Maintenance 6 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 19 / 25

How are scores calculated?
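The card itself does not spell out the formula, but the four 25-point sub-scores above sum exactly to the 47 / 100 overall score. A minimal sketch, assuming the overall score is simply the sum of the four components (the names and values are taken from this card):

```python
# Sub-scores as shown on the card; each component is out of 25.
subscores = {
    "Maintenance": 6,
    "Adoption": 6,
    "Maturity": 16,
    "Community": 19,
}

# Assumption: the 100-point overall score is the plain sum of the
# four 25-point components (consistent with 6 + 6 + 16 + 19 = 47).
total = sum(subscores.values())
print(f"{total} / 100")  # matches the 47 / 100 shown on the card
```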

Stars: 23
Forks: 22
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/aiim-research/GRETEL"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
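The endpoint above embeds the repository's owner and name as path segments. A minimal sketch of building the same URL for other repositories, assuming the `generative-ai` category segment stays fixed and only the `owner`/`repo` parts vary (the API's actual path scheme beyond this one example is an assumption):

```python
from urllib.parse import quote

# Base path taken from the card's example curl command; whether other
# categories exist alongside "generative-ai" is not documented here.
BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-card API URL for a GitHub-style owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("aiim-research", "GRETEL"))
# → https://pt-edge.onrender.com/api/v1/quality/generative-ai/aiim-research/GRETEL
```

Fetching the URL (e.g. with `curl` or `urllib.request`) is left out here, since the response schema is not shown on the card.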