AlaFalaki/AttentionVisualizer

A simple library to visualize the highest-scoring words in a sentence using a RoBERTa model

29 / 100 (Experimental)

This tool helps researchers and NLP engineers understand which words in a sentence a RoBERTa language model focuses on. Given an input text, it visually highlights the words the model weights most heavily, offering insight into the model's internal workings. It's designed for anyone evaluating or debugging transformer-based models.
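The core idea can be sketched with a toy attention matrix. In real usage the weights would come from a RoBERTa forward pass (e.g. with `output_attentions=True` in Hugging Face `transformers`); the matrix and sentence below are invented for illustration only:

```python
# Toy sketch: rank tokens by the attention they receive and highlight the top ones.
# The attention weights here are made up; a real run would take them from a
# RoBERTa forward pass with output_attentions=True.
tokens = ["The", "movie", "was", "surprisingly", "good"]

# One attention head: attention[i][j] = how much token i attends to token j.
attention = [
    [0.10, 0.30, 0.05, 0.25, 0.30],
    [0.05, 0.20, 0.05, 0.30, 0.40],
    [0.10, 0.25, 0.10, 0.25, 0.30],
    [0.05, 0.15, 0.05, 0.35, 0.40],
    [0.05, 0.20, 0.05, 0.30, 0.40],
]

def token_importance(attn):
    """Average attention each token *receives* across all query positions."""
    n = len(attn)
    return [sum(row[j] for row in attn) / n for j in range(n)]

def highlight(tokens, scores, top_k=2):
    """Wrap the top-k scoring tokens in brackets, mimicking a visual highlight."""
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]
    return " ".join(f"[{t}]" if i in top else t for i, t in enumerate(tokens))

scores = token_importance(attention)
print(highlight(tokens, scores))  # → The movie was [surprisingly] [good]
```

Averaging over query positions is just one of several common aggregation choices; per-head or per-layer views tell different stories about the same model.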

No commits in the last 6 months.

Use this if you need to visually interpret the 'attention' given to different words by a RoBERTa model to better understand its predictions or identify biases.

Not ideal if you are not working with RoBERTa models or need to analyze other model architectures.

natural-language-processing model-interpretability transformer-models AI-explainability
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 18
Forks: 5
Language: Python
License: None
Last pushed: Sep 23, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/AlaFalaki/AttentionVisualizer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
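The same request can be made from Python with the standard library. The endpoint URL is taken from the curl example above; the response field names are not documented here, so the sketch only parses the JSON without assuming its shape:

```python
import json
import urllib.request

# Endpoint from the listing above (free tier: 100 requests/day, no key needed).
API_URL = ("https://pt-edge.onrender.com/api/v1/"
           "quality/nlp/AlaFalaki/AttentionVisualizer")

def parse_quality(payload: str) -> dict:
    """Decode the JSON body returned by the quality endpoint."""
    return json.loads(payload)

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch and decode the quality data (performs a network request)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_quality(resp.read().decode("utf-8"))
```

With a free key, the higher 1,000/day limit would typically be enabled via a header or query parameter; check the API's own docs for the exact mechanism.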