pramodkaushik/acl18_results
Code to reproduce results in our ACL 2018 paper "Did the Model Understand the Question?"
This project helps researchers and developers probe the robustness and explainability of Question Answering (QA) models. Given a trained QA model, it generates adversarial examples that reveal whether the model truly understands the question or merely relies on superficial patterns. The output exposes weaknesses in the model's reasoning, which is valuable for AI safety and interpretability work.
No commits in the last 6 months.
Use this if you are developing or evaluating Question Answering models and need to rigorously test their understanding and identify weaknesses.
Not ideal if you are looking for a general-purpose tool to improve the accuracy of a QA model; this project focuses on evaluating interpretability and exposing vulnerabilities, not on boosting benchmark scores.
Stars: 33
Forks: 7
Language: —
License: —
Category: —
Last pushed: Jul 17, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/pramodkaushik/acl18_results"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/SememePSO-Attack
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial...