sunnynguyen-ai/llm-attention-visualizer

Interactive tool for analyzing attention patterns in transformer models with layer-wise visualizations, token importance scoring, and attention flow diagrams

Score: 40 / 100 (Emerging)

This interactive tool helps you understand how large language models (LLMs) interpret text. You input text, and it visualizes how the model's attention mechanism weights different words and their relationships. It's designed for researchers, educators, and anyone who wants to demystify how these models process information, with no coding required.
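The attention patterns a tool like this visualizes come from scaled dot-product attention: each token's row in the weight matrix says how strongly it attends to every other token. A minimal NumPy sketch of that computation (an illustration of the general mechanism, not the project's own code):

```python
import numpy as np

def attention_weights(Q, K):
    """Per-token attention weights: softmax(Q K^T / sqrt(d_k))."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)      # rows sum to 1

# Toy example: 3 tokens, one 4-dimensional attention head
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
W = attention_weights(Q, K)
print(W.round(3))  # row i = how token i distributes attention over all tokens
```

A heatmap of `W` (one per layer and head in a real transformer) is essentially what attention visualizers render.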

No commits in the last 6 months.

Use this if you need to visually analyze and interpret the internal workings of transformer-based language models on specific text inputs.

Not ideal if you are looking for a tool to train models or perform model inference rather than understand their internal attention patterns.

Tags: LLM interpretability, NLP research, AI education, Model debugging, Text analysis
Badges: Stale (6m), No Package, No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 18 / 25


Stars: 14
Forks: 13
Language: Python
License: MIT
Last pushed: Sep 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sunnynguyen-ai/llm-attention-visualizer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
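The same endpoint can be queried from Python's standard library. A sketch using the URL shown above (the response schema isn't documented here, so the JSON is returned as-is; the fetch is a live network call subject to the daily rate limit):

```python
import json
import urllib.request

# Endpoint exactly as given in the curl example above
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/sunnynguyen-ai/llm-attention-visualizer")

def fetch_quality(url: str = URL, timeout: float = 10.0) -> dict:
    """Fetch the quality report and parse the JSON response."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality()
    print(json.dumps(report, indent=2))
```

No API key is needed at the anonymous tier; for the 1,000/day tier, the key would be attached per the service's own instructions.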