poloclub/dodrio

Exploring attention weights in transformer-based models with linguistic knowledge.

Score: 42 / 100 (Emerging)

This tool helps you visually analyze and compare how transformer-based language models process text. You supply a pre-trained transformer model and sample text, and it produces interactive visualizations of the model's attention weights alongside linguistic information such as part-of-speech tags. It is designed for natural language processing (NLP) researchers and practitioners who want to understand why their models make certain predictions.

370 stars. No commits in the last 6 months.

Use this if you are an NLP researcher or practitioner who needs to interpret and debug the internal workings of transformer models.

Not ideal if you are looking for a tool to train new NLP models or if you need to perform large-scale, automated model evaluation without visual inspection.

natural-language-processing transformer-model-analysis linguistic-understanding model-interpretability nlp-research
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 370
Forks: 36
Language: Svelte
License: MIT
Last pushed: Oct 03, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/poloclub/dodrio"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
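For programmatic use, the same endpoint can be queried from Python. Below is a minimal sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and since the response schema is not documented here, the fetch helper simply decodes whatever JSON the endpoint returns:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch a repository's quality record (requires network access).

    The response schema is not documented in the listing, so this
    just returns the decoded JSON as-is.
    """
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# The dodrio listing above corresponds to:
print(quality_url("ml-frameworks", "poloclub", "dodrio"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/poloclub/dodrio
```

Calling `fetch_quality("ml-frameworks", "poloclub", "dodrio")` would retrieve the same record as the curl command, subject to the rate limits above.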