PAIR-code/saliency
Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).
This tool helps machine learning engineers and researchers understand which parts of an input (like pixels in an image) are most important for a model's prediction. You provide your trained model and input data, and it outputs visual 'saliency maps' that highlight these crucial areas. This is useful for interpreting model behavior, debugging, and building trust in AI systems.
993 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you need to explain why your machine learning model made a specific prediction, especially for images or similar input types.
Not ideal if you're not a machine learning practitioner or if your primary need is model training rather than interpretation.
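To illustrate the idea behind one of the methods this library implements, here is a self-contained NumPy sketch of SmoothGrad: average the gradient of the model output over several noisy copies of the input to get a less noisy saliency map. This is an educational sketch, not the library's actual API; the toy "model" and the `grad_fn` parameter are assumptions for the example.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=500, sigma=0.15, seed=0):
    """SmoothGrad sketch: average gradients over noisy copies of the input.

    grad_fn: returns the gradient of the model output w.r.t. its input
             (a hypothetical callable standing in for a real model).
    sigma:   std of the Gaussian noise added to the input.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)
        total += grad_fn(x + noise)  # gradient at a perturbed input
    return total / n_samples  # the averaged "saliency" values

# Toy model: f(x) = sum(x**2), whose true gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.array([0.5, -1.0, 2.0])
mask = smoothgrad(grad_fn, x)  # approaches 2*x as n_samples grows
```

With a real image model, `grad_fn` would backpropagate the class score to the input pixels, and the averaged result would be rendered as a heatmap over the image.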
Stars: 993
Forks: 196
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Mar 20, 2024
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PAIR-code/saliency"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...