PAIR-code/saliency

Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).

Quality score: 61 / 100 (Established)

This tool helps machine learning engineers and researchers understand which parts of an input (like pixels in an image) are most important for a model's prediction. You provide your trained model and input data, and it outputs visual 'saliency maps' that highlight these crucial areas. This is useful for interpreting model behavior, debugging, and building trust in AI systems.
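To make the idea concrete, here is a minimal sketch of one of the listed methods, SmoothGrad, which averages gradients over noisy copies of the input to produce a less noisy saliency map. This is a hedged illustration of the technique, not this library's API; a toy analytic gradient stands in for a real model's backward pass, and the function name `smoothgrad` and its parameters are hypothetical.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=25, noise_frac=0.15, seed=0):
    """SmoothGrad sketch: average grad_fn over noisy copies of x.

    grad_fn: callable returning the model's input gradient for an input
             (a real setup would backprop through a trained network).
    noise_frac: noise std as a fraction of the input's value range.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_frac * (x.max() - x.min())
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and accumulate gradients.
        total += grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
    return total / n_samples

# Toy "model": f(x) = sum(x**2), whose input gradient is exactly 2x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
mask = smoothgrad(grad_fn, x)  # approximately 2x, smoothed over noise
```

In the real library the gradient callable would wrap your trained model (TensorFlow, PyTorch, etc.), which is what makes the package framework-agnostic.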

993 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you need to explain why your machine learning model made a specific prediction, especially for image or similar data types.

Not ideal if you're not a machine learning practitioner or if your primary need is model training rather than interpretation.

Tags: AI explainability, model interpretation, computer vision, machine learning, debugging, deep learning, analysis
Status: Stale (no commits in 6 months)
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 25 / 25


Stars: 993
Forks: 196
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Mar 20, 2024
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PAIR-code/saliency"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.