jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
This tool helps researchers and practitioners understand how Transformer language models process text. Given input text and a Transformer model, it produces interactive visualizations showing which parts of the input the model 'pays attention' to. It is designed for anyone who needs to debug or interpret the behavior of modern language models, such as NLP researchers and machine learning engineers.
7,945 stars. Available on PyPI.
Use this if you need to visually explore the internal 'attention' mechanisms of Transformer models to understand why they make certain predictions or to debug their behavior.
Not ideal if you are looking for a tool to train models, evaluate general performance metrics, or visualize traditional machine learning models.
Stars: 7,945
Forks: 871
Language: Python
License: Apache-2.0
Last pushed: Jan 08, 2026
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jessevig/bertviz"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Related repositories
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...
taufeeque9/codebook-features
Sparse and discrete interpretability tool for neural networks