sicara/tf-explain

Interpretability Methods for tf.keras models with TensorFlow 2.x

Score: 55 / 100 (Established)

This tool helps machine learning engineers and researchers understand why their image recognition or other computer vision models make specific predictions. By providing your trained TensorFlow 2.x Keras model and an input image, it generates visual explanations, highlighting the most influential parts of the image that led to the model's decision. This allows practitioners to debug model behavior and build trust in their AI systems.
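As a rough sketch of the typical workflow, using the library's Grad-CAM explainer (the image path, ImageNet class index, and layer name below are illustrative assumptions, not values from this page):

import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

# Any trained tf.keras model works; VGG16 is used here purely as an example.
model = tf.keras.applications.VGG16(weights="imagenet", include_top=True)

# Load and preprocess one input image to the model's expected input size.
img = tf.keras.preprocessing.image.load_img("cat.jpg", target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)

# tf-explain takes a (images, labels) tuple; labels may be None.
data = (np.array([img]), None)

# Highlight the regions that drove the prediction for class 281
# ("tabby cat" in ImageNet), taken from VGG16's last convolutional layer.
explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281, layer_name="block5_conv3")

# Write the heatmap overlay to disk for inspection.
explainer.save(grid, ".", "grad_cam_cat.png")

The library also ships Keras callbacks for producing the same visualizations during training; check the project's documentation for the exact method names and options.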

1,036 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you need to visualize and interpret the decisions of your TensorFlow 2.x deep learning models, particularly for image data, to understand which input features are most important for a given prediction.

Not ideal if you are working with non-image data (like text or tabular data) or if your models are not built using TensorFlow 2.x Keras.

Tags: deep-learning-explainability, computer-vision, model-debugging, ai-interpretability, image-recognition
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 19 / 25


Stars: 1,036
Forks: 110
Language: Python
License: MIT
Last pushed: Jun 03, 2024
Commits (30d): 0
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sicara/tf-explain"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.