fredhohman/summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
When working with deep learning models, it can be hard to understand why a model makes certain predictions. Summit helps by letting you visually explore which parts of the input data the model focuses on and how different features combine to reach a decision. It is aimed at machine learning researchers and practitioners who need to interpret their models.
116 stars. No commits in the last 6 months.
Use this if you need to understand the internal workings and decision-making process of your deep learning models.
Not ideal if you are looking for a simple pass/fail metric for model performance rather than detailed interpretability.
Stars: 116
Forks: 16
Language: JavaScript
License: MIT
Last pushed: Jan 23, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fredhohman/summit"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
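If you prefer calling the endpoint from code rather than curl, here is a minimal Python sketch. The response schema is not documented on this page, so the field names in the sample payload (`stars`, `forks`, `language`, `license`) are assumptions modeled on the stats shown above, not a confirmed API contract:

```python
import json
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fredhohman/summit"

def fetch_repo_quality(url: str = API_URL) -> dict:
    """Fetch the quality report as JSON. The field names in the
    response are not documented here, so inspect the result before
    relying on any particular key."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Offline illustration with a made-up payload shaped like the stats on
# this page (hypothetical schema; the real response may differ):
sample = json.loads(
    '{"stars": 116, "forks": 16, "language": "JavaScript", "license": "MIT"}'
)
print(sample["stars"])
```

The anonymous tier allows 100 requests/day, so cache responses locally if you poll multiple repositories.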
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...