Trustworthy-ML-Lab/Linear-Explanations

[ICML 24] A novel automated neuron explanation framework that can accurately describe polysemantic concepts in deep neural networks

Score: 15 / 100 (Experimental)

This tool helps AI researchers and practitioners understand what specific parts (neurons) of a deep learning model are 'thinking' when it processes images. It takes a trained vision model and a dataset, then provides clear, human-understandable descriptions of the concepts each neuron has learned, like 'outdoor scenes' or 'animal faces'. This insight is crucial for debugging, improving, and trusting complex AI systems.
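As a rough illustration of that workflow, here is a minimal Python sketch. The repository is not published as a package, so the import path and the explain_neurons function below are hypothetical placeholders rather than the project's documented API; the actual entry points live in the repository's notebooks.

import torchvision.models as models
# Hypothetical API, for illustration only; see the repo's notebooks for the real entry points.
from linear_explanations import explain_neurons  # placeholder import, not a published package

model = models.resnet50(weights="DEFAULT")        # any trained vision model
descriptions = explain_neurons(                   # placeholder function name
    model=model,
    layer="layer4",                               # layer whose neurons you want described
    probe_dataset="path/to/probe/images",         # images used to probe neuron activations
)
for neuron_id, text in descriptions.items():
    print(neuron_id, text)                        # e.g. "outdoor scenes", "animal faces"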

No commits in the last 6 months.

Use this if you need to precisely understand what visual features or concepts individual neurons in your deep learning image models are responding to.

Not ideal if you are looking for explanations for non-vision models, general model interpretability, or explanations of overall model decisions rather than individual neuron functions.

AI-explainability deep-learning-interpretability computer-vision neural-network-analysis AI-safety
No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 14
Forks:
Language: Jupyter Notebook
License: None
Last pushed: May 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Linear-Explanations"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
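The same endpoint can also be called from Python; a minimal sketch using the requests library (the response schema is not documented here, so the JSON is printed as-is):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Linear-Explanations"
resp = requests.get(url, timeout=30)
resp.raise_for_status()   # surface HTTP errors instead of parsing an error page
print(resp.json())        # quality-score data as returned by the API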