MarcoParola/CIProVA-framework
Human-centered XAI via a Concept-Informed Prompt-based Validation framework for saliency maps [CIProVa]
This framework helps AI/ML researchers and practitioners evaluate how well explainable AI (XAI) methods align with human understanding when interpreting image classification models. Given an image classifier's predictions and saliency maps from several XAI techniques, it benchmarks how well each technique's explanations correspond to human-defined concepts in those images. It is aimed at professionals building or deploying AI systems who need to ensure their models' explanations are intuitively comprehensible to humans.
No commits in the last 6 months.
Use this if you are developing or evaluating deep learning models for image classification and need to rigorously assess the quality and human-centricity of their post-hoc explanations.
Not ideal if you are looking for a tool to generate saliency maps from scratch or to improve the accuracy of your image classification model itself.
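The core idea of benchmarking saliency maps against human-defined concepts can be illustrated with a minimal alignment metric. This is a sketch, not CIProVA's actual method: it assumes the concept is given as a binary pixel mask and simply measures what fraction of the saliency mass falls inside that region.

```python
import numpy as np

def concept_alignment_score(saliency: np.ndarray, concept_mask: np.ndarray) -> float:
    """Illustrative metric (not CIProVA's actual algorithm): the fraction
    of total saliency mass that lies inside a human-annotated concept
    region. 1.0 = the explanation attends only to the concept."""
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float((saliency * concept_mask).sum() / total)

# Toy example: half of the saliency mass lies inside the concept region.
saliency = np.zeros((8, 8))
saliency[2:4, 2:4] = 1.0   # attention inside the concept
saliency[6:8, 6:8] = 1.0   # attention outside the concept
concept = np.zeros((8, 8), dtype=bool)
concept[1:5, 1:5] = True   # human-labelled concept region
score = concept_alignment_score(saliency, concept)  # 0.5
```

Running the same score over maps from several XAI techniques (Grad-CAM, LIME, etc.) would yield a simple comparative benchmark of the kind the framework produces.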
Stars: 8
Forks: 1
Language: Python
License: CC0-1.0
Last pushed: Aug 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MarcoParola/CIProVA-framework"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
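The endpoint can also be consumed programmatically. The sketch below builds the request with the standard library; the JSON field names in the sample payload are assumptions for illustration, since the API's response schema is not documented here.

```python
import json
from urllib.request import urlopen

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/MarcoParola/CIProVA-framework")

# Live fetch (uncomment to use; 100 requests/day without a key):
# with urlopen(API_URL) as resp:
#     payload = json.load(resp)

# Hypothetical response shape for illustration; field names are assumed.
payload = json.loads("""
{"repo": "MarcoParola/CIProVA-framework",
 "stars": 8, "forks": 1,
 "language": "Python", "license": "CC0-1.0",
 "commits_30d": 0}
""")

print(f"{payload['repo']}: {payload['stars']} stars, "
      f"commits in last 30d: {payload['commits_30d']}")
```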
Higher-rated alternatives
obss/sahi: Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
tensorflow/tcav: Code for the TCAV ML interpretability project
MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5: A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...