ayaka14732/TrAVis

TrAVis: Visualise BERT attention in your browser

Quality score: 24 / 100 (Experimental)

Transformer-based language models such as BERT can be hard to interpret: it is not obvious how the model weighs different parts of its input as it processes text. TrAVis takes a text input and renders the model's attention patterns as interactive matrices, letting machine learning researchers and practitioners see which input tokens each layer and head attends to.
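The matrices TrAVis displays are the per-head attention weights, computed as softmax(QK^T / sqrt(d_k)) inside each Transformer layer. The sketch below is not TrAVis's actual code; it is a minimal NumPy illustration of how one such (seq_len x seq_len) attention matrix arises from query and key vectors, with random toy data standing in for real BERT activations:

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d_k)).

    The result is a (seq_len, seq_len) matrix in which row i gives how
    strongly token i attends to every token -- the kind of matrix that
    attention visualisers display.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # raw attention scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy example: 4 tokens, one 8-dimensional attention head
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
W = attention_weights(Q, K)
print(W.shape)        # (4, 4)
print(W.sum(axis=1))  # each row sums to 1
```

A BERT-base model produces 12 layers x 12 heads of these matrices per input, which is why an interactive viewer is more practical than printing them.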

No commits in the last 6 months.

Use this if you need to understand the internal workings of Transformer attention mechanisms for research or model interpretation.

Not ideal if you are looking for a tool to train models or simply want to use pre-trained models without needing to inspect their attention.

Natural Language Processing · Machine Learning Research · Model Interpretation · Deep Learning · Visualization · AI Explainability
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 58
Forks: 4
Language: Python
License: None
Last pushed: Feb 03, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ayaka14732/TrAVis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.