MarcoParola/CIProVA-framework

Human-centered XAI via a Concept-Informed Prompt-based Validation framework for saliency maps [CIProVa]

Overall score: 30 / 100 (Emerging)

This framework helps AI/ML researchers and practitioners evaluate how well explainable AI (XAI) methods align with human understanding when interpreting image classification models. You provide an image classification model's predictions and various saliency maps, and it outputs a benchmark of how well different XAI techniques correspond to human-defined concepts within those images. It is aimed at professionals building or deploying AI systems who need their models' explanations to be intuitively comprehensible to humans.

No commits in the last 6 months.

Use this if you are developing or evaluating deep learning models for image classification and need to rigorously assess the quality and human-centricity of their post-hoc explanations.

Not ideal if you are looking for a tool to generate saliency maps from scratch or to improve the accuracy of your image classification model itself.

Tags: AI explanation · model interpretability · image classification · human-AI interaction · machine learning evaluation
Badges: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 8 / 25
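The overall score appears to be the sum of the four 25-point subscores (2 + 4 + 16 + 8 = 30). A minimal sketch of that aggregation, assuming a simple sum with no weighting (the rule is inferred from the numbers on this page, not documented):

```python
# Subscores as shown on this page; each category is scored out of 25.
subscores = {
    "Maintenance": 2,
    "Adoption": 4,
    "Maturity": 16,
    "Community": 8,
}

# Assumption: the 100-point overall score is the unweighted sum
# of the four 25-point categories.
overall = sum(subscores.values())
print(f"{overall} / 100")  # 30 / 100
```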


Stars: 8
Forks: 1
Language: Python
License: CC0-1.0
Last pushed: Aug 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MarcoParola/CIProVA-framework"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
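The same endpoint can be called from Python. A minimal sketch using only the standard library; the URL pattern comes from the curl command above, but the shape of the JSON response is an assumption, so only the URL builder is exercised here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (pattern from the curl example)."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; up to 100 requests/day need no API key.

    Assumes the endpoint returns a JSON body (not documented on this page).
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


print(quality_url("MarcoParola", "CIProVA-framework"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MarcoParola/CIProVA-framework
```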