DFKI-NLP/SMV
Code and data for the ACL 2023 NLReasoning Workshop paper "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods" (Feldhus et al., 2023)
This tool helps you understand why a text-analyzing AI model made a specific decision. You provide a text input the model has analyzed and receive a clear, human-readable explanation highlighting the words or phrases that were most influential in its prediction. This is useful for anyone who needs to interpret or trust the output of natural language processing models, such as data scientists, AI researchers, or product managers overseeing AI-driven text analysis.
No commits in the last 6 months.
Use this if you need to quickly and clearly understand which parts of a text input were most crucial for an AI model's decision, making its reasoning transparent.
Not ideal if you are looking to explain models that process non-textual data like images, audio, or tabular information.
Stars: 9
Forks: 1
Language: Python
License: —
Last pushed: Jul 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DFKI-NLP/SMV"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
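If you query the endpoint programmatically, a small helper can build the request URL for any repository. This is a minimal sketch, not part of the tool itself; `quality_url` is a hypothetical helper name, and only the path shape shown in the curl example above is assumed.

```python
# Sketch: build the quality-API URL for a repository (hypothetical helper;
# assumes the /api/v1/quality/transformers/<owner>/<repo> path shown above).
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Return the API endpoint URL for a given GitHub owner/repo."""
    # Percent-encode each path segment so unusual names stay valid URLs.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("DFKI-NLP", "SMV"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/DFKI-NLP/SMV
```

The response can then be fetched with any HTTP client (e.g. `curl` as shown above, or `urllib.request` in Python) within the stated rate limits.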
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...