Trustworthy-ML-Lab/Linear-Explanations
[ICML 24] A novel automated neuron explanation framework that can accurately describe polysemantic concepts in deep neural networks
This tool helps AI researchers and practitioners understand what individual neurons in a deep learning model respond to when it processes images. It takes a trained vision model and a dataset, then produces clear, human-readable descriptions of the concepts each neuron has learned, such as 'outdoor scenes' or 'animal faces'. This insight is useful for debugging, improving, and building trust in complex AI systems.
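To illustrate the underlying idea only (this is not this repository's API; the model choice, layer, neuron index, and image path below are placeholder assumptions), a single neuron's behavior can be probed by hooking a layer of a vision model and ranking dataset images by how strongly that neuron activates:

# Illustrative sketch: hook a ResNet-18 layer and rank images by one neuron's
# mean activation. Layer, neuron index, and data path are hypothetical
# placeholders, not part of Linear-Explanations itself.
import torch
import torchvision
from torchvision import transforms

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
activations = {}

def hook(module, inputs, output):
    # output shape: (batch, channels, H, W); keep the per-image mean per channel
    activations["feats"] = output.mean(dim=(2, 3)).detach()

model.layer4.register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = torchvision.datasets.ImageFolder("path/to/images", transform=preprocess)  # placeholder path
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

neuron = 7  # hypothetical neuron index to inspect
scores = []
with torch.no_grad():
    for images, _ in loader:
        model(images)
        scores.append(activations["feats"][:, neuron])
scores = torch.cat(scores)
top = scores.topk(10).indices  # the 10 most strongly activating images
print([dataset.samples[i][0] for i in top.tolist()])

Inspecting the top-activating images hints at what a neuron encodes; the framework in this repository goes further by producing textual concept descriptions, including for neurons that mix several concepts.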
No commits in the last 6 months.
Use this if you need to precisely understand what visual features or concepts individual neurons in your deep learning image models are responding to.
Not ideal if you are looking for explanations for non-vision models, general model interpretability, or explanations of overall model decisions rather than individual neuron functions.
Stars: 14
Forks: —
Language: Jupyter Notebook
License: —
Last pushed: May 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Linear-Explanations"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
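A minimal sketch of calling the same endpoint from Python (assuming the response body is JSON; the exact fields are not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Linear-Explanations"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumes a JSON body; field names are not specified on this page
print(data)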
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...