DFKI-NLP/SMV

Code and data for the ACL 2023 NLReasoning Workshop paper "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods" (Feldhus et al., 2023)

Score: 21 / 100 (Experimental)

This tool helps you understand why a text-analyzing AI model made a specific decision. You provide a text input that the AI has analyzed and receive a clear, human-readable explanation highlighting the exact words or phrases that were most influential in the AI's prediction. This is useful for anyone who needs to interpret or trust the output of natural language processing models, such as data scientists, AI researchers, or product managers overseeing AI-driven text analysis.

No commits in the last 6 months.

Use this if you need to quickly and clearly understand which parts of a text input were most crucial for an AI model's decision, making its reasoning transparent.

Not ideal if you are looking to explain models that process non-textual data like images, audio, or tabular information.

natural-language-processing AI-explainability text-analysis model-interpretation NLP-auditing
No License · Stale (6 months) · No Package · No Dependents
Score breakdown:
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Python
License: None
Last pushed: Jul 27, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DFKI-NLP/SMV"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
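The same endpoint can be queried from Python. A minimal sketch: the `quality_url` helper is a hypothetical convenience for building the URL shown above, and the response schema is not documented on this page, so the fetch step is left as a commented-out option.

```python
from urllib.parse import quote

# Base endpoint as shown in the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(platform: str, owner: str, repo: str) -> str:
    """Build the quality-score URL for a repository (illustrative helper)."""
    return f"{API_BASE}/{quote(platform)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "DFKI-NLP", "SMV")
print(url)

# To actually fetch the data (requires network access; response
# fields are an assumption, not documented here):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```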