DFKI-NLP/thermostat
Collection of NLP model explanations and accompanying analysis tools
This project provides pre-computed explanations of how various Natural Language Processing (NLP) models make their predictions on common text datasets. For each dataset–model pair, it supplies per-token 'attributions' that quantify how much each word contributed to the model's decision. It is aimed at NLP researchers, data scientists, and anyone who needs to understand why a model classified a text a certain way, beyond the final prediction alone.
144 stars. No commits in the last 6 months.
Use this if you need to quickly access and visualize explanations for common NLP models and datasets, without having to re-run complex explainability algorithms yourself.
Not ideal if you need explanations for a highly customized NLP model or a proprietary dataset that is not included in the existing collection.
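To make the idea of per-token attributions concrete, here is a minimal sketch of how such scores might be consumed. The (token, score) pairs are hypothetical illustration data, not actual thermostat output, and the ranking logic is an assumption about a typical workflow rather than the library's API.

```python
# Illustrative sketch: ranking tokens by attribution score.
# The (token, score) pairs below are hypothetical, not real thermostat output.
attributions = [
    ("the", 0.01), ("movie", 0.05), ("was", 0.02),
    ("absolutely", 0.40), ("wonderful", 0.85),
]

# Sort by absolute attribution so the most influential tokens come first,
# regardless of whether they pushed the prediction up or down.
ranked = sorted(attributions, key=lambda pair: abs(pair[1]), reverse=True)

for token, score in ranked[:3]:
    print(f"{token}: {score:+.2f}")
```

Here "wonderful" dominates the ranking, which is the kind of signal that tells you the classifier keyed on sentiment-bearing words rather than filler.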
Stars: 144
Forks: 8
Language: Jsonnet
License: Apache-2.0
Last pushed: Jun 26, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DFKI-NLP/thermostat"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
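If you want to consume the endpoint programmatically rather than eyeball the JSON, a small parsing sketch follows. The field names (`repo`, `stars`, `forks`, `commits_30d`) are assumptions about the response shape, not documented output, and the sample payload is hard-coded so the snippet runs without a network call.

```python
import json

# Hypothetical response shape; field names are assumptions, not documented.
sample = """{
  "repo": "DFKI-NLP/thermostat",
  "stars": 144,
  "forks": 8,
  "license": "Apache-2.0",
  "commits_30d": 0
}"""

data = json.loads(sample)

# Flag repos that look unmaintained: zero commits in the last 30 days.
is_stale = data["commits_30d"] == 0
print(f'{data["repo"]}: {data["stars"]} stars, stale={is_stale}')
```

Swapping the hard-coded `sample` for the body of the curl request above is the only change needed to check a live repo, assuming the real response uses similar keys.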
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...