princeton-nlp/rationale-robustness

NAACL 2022: Can Rationalization Improve Robustness? https://arxiv.org/abs/2204.11790

Score: 19 / 100 (Experimental)

This project helps researchers and practitioners evaluate and improve the reliability of natural language processing (NLP) models that explain their decisions. Given text data and an existing NLP model's predictions, it tests how well the model's explanations (rationales) protect it from being misled by irrelevant or adversarial "attack" text inserted into the input. The output shows how robust the model's rationales are across attack scenarios, which is useful for NLP researchers and AI-ethics practitioners.
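To make the evaluation idea concrete, here is a minimal, self-contained Python sketch with toy stand-ins. The repository's actual models are neural rationale extractors; nothing below is its real API, and the lexicon-based "model" exists only for illustration.

def rationalize(tokens):
    # Toy rationale extractor: keep only sentiment-bearing tokens.
    lexicon = {"great", "terrible", "love", "hate", "good", "bad"}
    return [t for t in tokens if t.lower() in lexicon]

def predict(tokens):
    # Toy classifier that only sees the extracted rationale.
    rationale = rationalize(tokens)
    score = sum(1 if t.lower() in {"great", "love", "good"} else -1
                for t in rationale)
    return "positive" if score >= 0 else "negative"

def is_robust(text, attack):
    # The core test: does appending irrelevant "attack" text
    # flip the prediction made on the clean input?
    clean = text.split()
    attacked = clean + attack.split()
    return predict(clean) == predict(attacked)

# The rationale discards most of the off-topic attack sentence,
# so the prediction survives; a model reading the full input
# might be distracted by it.
print(is_robust("I love this movie it is great",
                "the weather report was bad today"))  # True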

No commits in the last 6 months.

Use this if you are working with NLP models that provide explanations for their predictions and need to understand how resilient these explanations are to adversarial inputs or 'noisy' text.

Not ideal if you are looking for a general-purpose NLP model or if your primary concern is model accuracy rather than the robustness of its explanations.

Tags: NLP-model-evaluation, explainable-AI, AI-robustness, text-analysis, natural-language-understanding
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 4 / 25


Stars: 27
Forks: 1
Language: Python
License: none
Last pushed: Nov 21, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/princeton-nlp/rationale-robustness"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
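If you prefer to call the endpoint from Python rather than curl, a short sketch using the third-party requests library follows. The JSON field names in the comments are assumptions inferred from the scores shown above, not a documented schema.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "princeton-nlp/rationale-robustness")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()

# Field names below are assumptions based on the values
# displayed on this page; check the actual response payload.
print(data.get("score"))        # e.g. 19
print(data.get("maintenance"))  # e.g. 0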