inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
This tool helps machine learning engineers and researchers understand why their sequence generation models produce specific outputs. Given a sequence generation model and an input text, it produces visualizations showing which parts of the input most influenced each part of the generated output. This is crucial for debugging, improving, and building trust in text-generating models such as translation or summarization systems.
462 stars. Available on PyPI.
Use this if you need to explain why your text generation models produce the outputs they do, rather than just measuring their performance.
Not ideal if you are working with non-text data, traditional machine learning models, or only need basic performance metrics for your sequence models.
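A minimal sketch of the typical workflow, following the inseq README: load a model with an attribution method, attribute an input, and render the result. The model ID and method below are illustrative choices, not the only options.

```python
# Sketch of a typical inseq session (assumes `pip install inseq`).
# MODEL_ID and METHOD are example choices; inseq supports many
# Hugging Face models and several attribution methods.
MODEL_ID = "Helsinki-NLP/opus-mt-en-fr"  # example translation model
METHOD = "integrated_gradients"          # example attribution method

if __name__ == "__main__":
    import inseq  # heavy import: pulls in torch and transformers

    # Wrap the model with the chosen attribution method.
    model = inseq.load_model(MODEL_ID, METHOD)
    # Compute token-level attributions for a single input.
    out = model.attribute("Hello world, this is a test.")
    # Render a heatmap of which input tokens drove each output token.
    out.show()
```

The `out.show()` call renders the attribution scores inline in a notebook or as HTML, which is where the "visualizations" mentioned above come from.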
Stars
462
Forks
39
Language
Python
License
Apache-2.0
Last pushed
Mar 06, 2026
Commits (30d)
0
Dependencies
32
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/inseq-team/inseq"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
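For programmatic use, the curl call above can be reproduced in Python. This is a sketch assuming the endpoint returns JSON; the `quality_url` helper is hypothetical, introduced here only to build the URL shown above.

```python
import json
from urllib.request import urlopen

# Base of the quality endpoint shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

if __name__ == "__main__":
    # Network call: counts against the 100 requests/day anonymous limit.
    with urlopen(quality_url("inseq-team", "inseq")) as resp:
        data = json.load(resp)  # assumes a JSON response body
    print(data)
```

With a free key, the same request can be sent with an auth header for the higher 1,000/day limit.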
Related models
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...
taufeeque9/codebook-features
Sparse and discrete interpretability tool for neural networks