ayaka14732/TrAVis
TrAVis: Visualise BERT attention in your browser
Transformer-based language models like BERT can be hard to interpret: it is not obvious how the model processes a given text. TrAVis takes a text input and renders the model's attention patterns as interactive matrices, letting machine learning researchers and practitioners see which parts of the input each attention head focuses on.
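The matrices such a tool displays are attention weights, i.e. softmax(QK^T / sqrt(d)) for query and key projections Q and K. A minimal NumPy sketch of that computation (conceptual only; TrAVis itself runs in the browser, and the random Q/K here are placeholders, not real model activations):

```python
import numpy as np

def attention_weights(Q: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, head dimension 8
K = rng.normal(size=(4, 8))
A = attention_weights(Q, K)
print(A.shape)  # one row per token; each row sums to 1
```

Each row of `A` is the distribution over input tokens that one query token attends to, which is exactly what the visualised heatmap shows per head.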
No commits in the last 6 months.
Use this if you need to understand the internal workings of Transformer attention mechanisms for research or model interpretation.
Not ideal if you are looking for a tool to train models, or if you simply want to use pre-trained models without inspecting their attention.
Stars: 58
Forks: 4
Language: Python
License: —
Last pushed: Feb 03, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ayaka14732/TrAVis"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...