wln20/Attention-Viewer

A plug-and-play tool for visualizing attention-score heatmaps in generative LLMs. Easy to customize for your own needs.

Score: 26 / 100 (Experimental)

This tool helps machine learning researchers and engineers understand how large language models (LLMs) process text. You input an LLM, its tokenizer, and a text prompt, and it outputs visual heatmaps showing which parts of the input text the model "pays attention" to at different stages. This allows you to gain insights into the model's internal reasoning and identify potential biases or unexpected behaviors.
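To make that concrete, here is a minimal sketch of the kind of heatmap this tool produces. It is written against Hugging Face transformers rather than this repo's own interface (which isn't documented on this page); the model name ("gpt2") and all plotting choices are illustrative assumptions.

# Illustrative only: reproduces the kind of attention heatmap Attention-Viewer
# renders, using Hugging Face transformers directly. The model name ("gpt2")
# and plotting choices are assumptions, not this repo's actual API.
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True)

inputs = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")
attentions = model(**inputs).attentions  # one tensor per layer: [batch, heads, seq, seq]

layer = 0
heatmap = attentions[layer][0].mean(dim=0).detach()  # average over heads

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
plt.imshow(heatmap, cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.xlabel("key token")
plt.ylabel("query token")
plt.title(f"Layer {layer} attention, head-averaged")
plt.tight_layout()
plt.show()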

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer working with generative LLMs and need to visually inspect their internal attention mechanisms for interpretability or debugging.

Not ideal if you are a casual LLM user looking for a simple application or do not have experience working with LLM codebases and model architectures.

Tags: LLM interpretability, NLP research, model debugging, attention visualization, deep learning engineering
No License · Stale (6m) · No Package · No Dependents
Score breakdown (each dimension is out of 25; the four subscores sum to the 26 / 100 overall):
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 51
Forks: 5
Language: Python
License: None
Last pushed: May 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wln20/Attention-Viewer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
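The same data can be fetched from Python. A minimal stdlib sketch follows; the response schema isn't documented on this page, so it simply prints whatever JSON the endpoint returns rather than assuming field names.

# Minimal sketch of calling the quality API from Python (stdlib only).
# The response schema isn't documented here, so the raw JSON is printed
# as-is rather than assuming any field names.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wln20/Attention-Viewer"
with urllib.request.urlopen(URL) as resp:
    print(json.dumps(json.load(resp), indent=2))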