hila-chefer/Transformer-Explainability

[CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method for visualizing classifications made by Transformer-based networks.

Score: 48 / 100 (Emerging)

This project helps machine learning practitioners understand why their Transformer-based models make specific predictions. It takes a trained vision or natural language processing model and an input (like an image or text), then outputs visual or text-based explanations showing the key parts of the input that led to the model's classification. Data scientists, AI researchers, and ML engineers can use this to debug models or build trust in their AI systems.
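As a sketch of typical usage for a vision model (module paths and function names follow the repository's ViT example notebook; treat them as assumptions and verify against the current code):

import torch
from baselines.ViT.ViT_LRP import vit_base_patch16_224
from baselines.ViT.ViT_explanation_generator import LRP

# Load the LRP-enabled ViT variant shipped with the repository
model = vit_base_patch16_224(pretrained=True).eval()
attribution_generator = LRP(model)

# A preprocessed image tensor of shape (3, 224, 224);
# a random placeholder is used here for illustration
image = torch.randn(3, 224, 224)

# Relevance scores per input patch for the predicted class
# (pass index=... to explain a specific class instead)
relevance = attribution_generator.generate_LRP(
    image.unsqueeze(0),
    method="transformer_attribution",
).detach()

# The per-token relevance can be reshaped to the 14x14 patch grid
# and upsampled for overlay on the original image.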

1,981 stars. No commits in the last 6 months.

Use this if you need to interpret the decision-making process of a Transformer model for image classification, object detection, or sentiment analysis tasks.

Not ideal if you are working with non-Transformer models or traditional machine learning algorithms, as this method is specifically designed for Transformer architectures.

Tags: AI explainability, computer vision, natural language processing, model debugging, machine learning interpretation
Badges: Stale (6 months), No Package, No Dependents
Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25

Stars: 1,981
Forks: 259
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 24, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hila-chefer/Transformer-Explainability"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
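A minimal Python equivalent of the curl call above (assuming the endpoint returns JSON; the response schema is not documented here):

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/hila-chefer/Transformer-Explainability")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed JSON payload with the score and stats shown above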