cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
This tool helps data scientists, machine learning engineers, and researchers understand why their models make specific predictions on text or image data. Given a Hugging Face transformer model and an input, it shows which parts of that input most strongly influenced the model's decision, making otherwise opaque behavior more transparent. This is valuable for debugging models, building trust, and auditing fairness.
1,413 stars. No commits in the last 6 months.
Use this if you need to explain the reasoning behind predictions made by a transformer-based AI model, whether for a single piece of text, a pair of texts, or an image.
Not ideal if you are working with non-transformer models, traditional machine learning algorithms, or tabular data, as it's specifically built for the Hugging Face Transformers ecosystem.
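The "2 lines of code" claim refers to the library's explainer classes. A minimal sketch of the documented sequence-classification workflow follows; the model checkpoint and input sentence here are illustrative choices, not part of this listing:

```python
# Sketch of transformers-interpret's basic usage, based on its documented API.
# Requires: pip install transformers-interpret
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The "2 lines": build an explainer, then call it on text to get
# per-token attribution scores for the predicted class.
cls_explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = cls_explainer("The movie was a delight from start to finish.")
print(word_attributions)  # list of (token, attribution score) pairs
```

Under the hood the library computes these attributions with Captum's integrated-gradients implementation, which is why it works across the Transformers model zoo without model-specific code.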
Stars: 1,413
Forks: 100
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Aug 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cdpierse/transformers-interpret"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
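The same endpoint can be queried from Python instead of curl. A minimal sketch, assuming the endpoint returns a JSON body (its schema is not documented on this page, so the response is printed rather than picked apart):

```python
# Fetch repo-quality data from the pt-edge API shown above.
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/cdpierse/transformers-interpret"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()          # fail loudly on HTTP errors or rate limiting
data = resp.json()               # assumed JSON; inspect the keys before relying on them
print(data)
```

Note the unauthenticated limit of 100 requests/day; batch or cache calls accordingly.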
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
taufeeque9/codebook-features
Sparse and discrete interpretability tool for neural networks