hila-chefer/Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers," a novel method for visualizing any Transformer-based network. Includes examples for DETR and VQA.
This project helps AI researchers and practitioners see which parts of an image and text a Transformer-based model is attending to. Given an image and a question or statement, it produces visual highlights over the image, showing which regions were most important for the model's decision. It is aimed at AI developers, researchers, and data scientists working with multimodal models.
903 stars. No commits in the last 6 months.
Use this if you need to interpret why a Transformer-based AI model made a specific prediction when given both image and text inputs.
Not ideal if you are working with traditional machine learning models or need explainability for purely textual or purely visual AI systems.
Stars: 903
Forks: 115
Language: Jupyter Notebook
License: MIT
Last pushed: Aug 24, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hila-chefer/Transformer-MM-Explainability"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
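The endpoint can also be consumed programmatically. As a minimal sketch, the snippet below decodes a response body in Python; the JSON field names shown are an assumption for illustration, not a documented schema, so check an actual response before relying on them:

```python
import json

# Hypothetical response body from the quality endpoint above.
# Field names (stars, forks, commits_30d, ...) are assumed, not documented;
# the values match the stats shown on this page.
sample = '{"stars": 903, "forks": 115, "language": "Jupyter Notebook", "commits_30d": 0}'

data = json.loads(sample)
print(f"{data['stars']} stars, {data['forks']} forks, "
      f"{data['commits_30d']} commits in the last 30 days")
```

In a real script you would fetch the URL first (e.g. with `urllib.request` or `requests`) and pass the response text to `json.loads` in the same way.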
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...
taufeeque9/codebook-features
Sparse and discrete interpretability tool for neural networks