fanconic/this-does-not-look-like-that

Code for the experiments in the ICML 2021 Interpretability workshop paper "This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks".

Overall quality score: 29 / 100 (Experimental)

This project helps machine learning researchers and practitioners understand the limitations of interpretable AI models that use 'prototypes' to explain their predictions. By providing code to replicate specific experiments, it demonstrates how these models can be easily fooled or fail to explain decisions accurately. Users can input image datasets and prototype-based model configurations to analyze how the model's 'reasoning' changes under various conditions.

No commits in the last 6 months.

Use this if you are a researcher or practitioner in explainable AI, particularly one interested in the robustness and trustworthiness of prototype-based interpretability methods.

Not ideal if you are looking for a general-purpose tool to make your deep learning models interpretable, or if you are unfamiliar with training and evaluating deep learning models.

explainable-ai deep-learning-research model-interpretability image-classification adversarial-robustness
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 15 / 25

How are scores calculated? Each of the four categories above is scored out of 25, and the subscores sum to the overall score: 0 + 6 + 8 + 15 = 29 / 100.

Stars: 20
Forks: 5
Language: Jupyter Notebook
License: None
Last pushed: May 29, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fanconic/this-does-not-look-like-that"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
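
For scripted access, here is a minimal Python sketch that fetches the same data using only the standard library. The endpoint URL is taken from the curl example above; the JSON field names in the final lines are assumptions, since the response schema is not documented here.

import json
import urllib.request

# Endpoint copied from the curl example above; no API key is
# needed for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/fanconic/this-does-not-look-like-that")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Print the full payload to inspect the actual keys the API returns.
print(json.dumps(data, indent=2))

# Hypothetical field name: assumes the 0-100 overall score is
# exposed under a key called "score".
print(data.get("score"))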