jessevig/bertviz

BertViz: Visualize Attention in Transformer Models

Score: 61 / 100 (Established)

This tool helps researchers and practitioners understand how Transformer language models process text. Given input text and a Transformer model, it produces interactive visualizations showing which parts of the input the model 'pays attention' to. It is designed for NLP scientists, machine learning engineers, and anyone else working with Transformer models who needs to debug or interpret model behavior.

7,945 stars. Available on PyPI.

Use this if you need to visually explore the internal 'attention' mechanisms of Transformer models to understand why they make certain predictions or to debug their behavior.

Not ideal if you are looking for a tool to train models, evaluate general performance metrics, or visualize traditional machine learning models.

natural-language-processing machine-learning-interpretability transformer-models deep-learning-research
Maintenance 6 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 20 / 25


Stars: 7,945
Forks: 871
Language: Python
License: Apache-2.0
Last pushed: Jan 08, 2026
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jessevig/bertviz"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
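The same request can be made from Python with only the standard library. A sketch, assuming the endpoint path shown in the curl example above; the shape of the JSON response is not documented here, so any fields you read from it are assumptions:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (no key needed, 100 req/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live network request):
# data = fetch_quality("jessevig", "bertviz")
# print(json.dumps(data, indent=2))
```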