inseq-team/inseq

Interpretability for sequence generation models 🐛 🔍

Score: 60 / 100 (Established)

This tool helps machine learning engineers and researchers understand why their sequence generation models produce specific outputs. Given a sequence generation model and an input text, it produces visualizations showing which parts of the input most influenced each part of the generated output. This is crucial for debugging, improving, and building trust in models that generate text, such as translation or summarization systems.
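
As a rough sketch of that workflow (the model checkpoint and attribution method named here are illustrative; consult the inseq documentation for the exact API):

import inseq

# Wrap a Hugging Face seq2seq model with an attribution method
# ("saliency" is one of the gradient-based methods inseq supports).
model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "saliency")

# Attribute a sample input; the result links each generated token
# back to the input tokens that influenced it.
out = model.attribute("Hello world, this is a test.")

# Render the attribution visualization.
out.show()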

462 stars. Available on PyPI.

Use this if you need to explain the reasoning behind the output of your text generation models, rather than just observing their performance.

Not ideal if you are working with non-text data or traditional machine learning models, or if you only need basic performance metrics for your sequence models.

natural-language-processing machine-learning-interpretability text-generation model-debugging ai-explainability
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 15 / 25


Stars: 462
Forks: 39
Language: Python
License: Apache-2.0
Last pushed: Mar 06, 2026
Commits (30d): 0
Dependencies: 32

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/inseq-team/inseq"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
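
For scripted access, the same endpoint can be queried from Python. A minimal sketch, assuming the endpoint returns a JSON body (the response schema is not documented here, so the result is printed rather than parsed into specific fields):

import requests

# Same endpoint as the curl example above; no key is needed
# within the 100 requests/day limit.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/inseq-team/inseq"

response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors

data = response.json()  # assumed to be a JSON object
print(data)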