stavrostheocharis/easy_explain

An XAI library that helps explain AI models quickly and easily

Score: 40 / 100 (Emerging)

This tool helps AI researchers and developers understand why their image-based AI models make specific predictions. You feed in an image and your trained model, and it outputs visual explanations such as heatmaps showing which parts of the image most influenced the model's decision. It's designed for anyone working with computer vision models who needs to interpret model behavior.

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer or researcher developing image classification or object detection models and need to explain their internal decision-making process in a straightforward way.

Not ideal if you need to explain models that don't process images, such as those working with text, tabular data, or time series.
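To illustrate the kind of heatmap explanation described above, here is a toy occlusion-sensitivity sketch in plain Python. This is not easy_explain's actual API; a tiny scoring function stands in for a trained model, and the patch-sliding helper is purely illustrative. The idea is the same: regions whose removal hurts the prediction most are the regions the model relied on.

```python
def toy_model_score(image):
    """Stand-in 'model': scores an image by the brightness of its centre."""
    h, w = len(image), len(image[0])
    return sum(image[r][c]
               for r in range(h // 4, 3 * h // 4)
               for c in range(w // 4, 3 * w // 4))

def occlusion_heatmap(image, score_fn, patch=2):
    """Slide a patch of zeros over the image; the score drop at each
    position measures how important that region was to the prediction."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heatmap = [[0.0] * w for _ in range(h)]
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = [row[:] for row in image]    # copy the image
            for rr in range(r, min(r + patch, h)):  # zero out one patch
                for cc in range(c, min(c + patch, w)):
                    occluded[rr][cc] = 0
            drop = base - score_fn(occluded)        # importance = score drop
            for rr in range(r, min(r + patch, h)):
                for cc in range(c, min(c + patch, w)):
                    heatmap[rr][cc] = drop
    return heatmap

image = [[1] * 8 for _ in range(8)]
hm = occlusion_heatmap(image, toy_model_score)
print(hm[4][4] > hm[0][0])  # centre regions matter most to this toy model → True
```

Libraries like easy_explain apply the same principle (and gradient-based variants) to real PyTorch models, rendering the resulting importance map as an image overlay.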

computer-vision AI-explainability image-classification object-detection machine-learning-research
Stale: 6m
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 9 / 25


Stars: 17
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 08, 2024
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/stavrostheocharis/easy_explain"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
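The same endpoint can be called from Python instead of curl. A minimal sketch follows; the URL path comes from the listing above, but the response JSON schema is an assumption (it is not documented here), so the fetch is left commented out.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner, repo):
    """Build the quality-data URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url("stavrostheocharis", "easy_explain")
print(url)

# Uncomment to fetch live data (subject to the 100 requests/day limit):
# with urllib.request.urlopen(url) as resp:
#     data = json.loads(resp.read())
#     print(data)
```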