fanconic/this-does-not-look-like-that
Code for the experiments of the ICML 2021 Interpretability workshop paper "This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks"
This project helps machine learning researchers and practitioners understand the limitations of interpretable AI models that explain predictions via learned 'prototypes'. By providing code to replicate the paper's experiments, it demonstrates how such models can be fooled, or can fail to explain their decisions accurately. Users supply image datasets and prototype-based model configurations, then analyze how the model's 'reasoning' changes under various conditions.
No commits in the last 6 months.
Use this if you are a researcher or practitioner in explainable AI, particularly interested in the robustness and trustworthiness of prototype-based interpretability methods.
Not ideal if you are looking for a general-purpose tool to make your deep learning models interpretable or if you are not familiar with deep learning model training and evaluation.
Stars: 20
Forks: 5
Language: Jupyter Notebook
License: —
Last pushed: May 29, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fanconic/this-does-not-look-like-that"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...