pramodkaushik/acl18_results

Code to reproduce results in our ACL 2018 paper "Did the Model Understand the Question?"

Quality score: 31 / 100 (Emerging)

This project helps researchers and developers working with Question Answering (QA) models understand the robustness and explainability of their systems. It takes a trained QA model and generates adversarial examples, helping you determine whether the model truly understands the question or is relying on superficial patterns. The output reveals vulnerabilities in the model's reasoning, which is valuable for AI safety and interpretability specialists.

No commits in the last 6 months.

Use this if you are developing or evaluating Question Answering models and need to rigorously test their understanding and identify weaknesses.

Not ideal if you are looking for a general-purpose tool to improve the accuracy of your QA model; this project focuses on evaluating a model's interpretability and vulnerabilities, not raising its accuracy.

Tags: AI Safety, NLP Research, Model Explainability, Question Answering, Adversarial Robustness
Badges: No License, Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 33
Forks: 7
Language: not listed
License: none
Last pushed: Jul 17, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/pramodkaushik/acl18_results"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
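For scripted access, the same endpoint can be called from Python. A minimal sketch using the third-party requests library; the response schema is not documented here, so it is assumed only that the endpoint returns a JSON body, and the example simply pretty-prints whatever comes back:

import json

import requests

# Quality endpoint for this repo, taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/pramodkaushik/acl18_results"

def fetch_quality_report(url: str = URL) -> dict:
    """Fetch the quality report and return the parsed JSON payload."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx errors, e.g. when rate-limited
    return resp.json()

if __name__ == "__main__":
    # Schema unknown: pretty-print the raw payload for inspection.
    print(json.dumps(fetch_quality_report(), indent=2))

Anonymous calls count against the 100 requests/day limit noted above; how an API key is attached to a request is not documented here, so none is shown.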