INK-USC/RobustLR
A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners
This tool helps AI researchers and developers assess how well their deductive reasoning models understand logical relationships in text. It takes a trained natural language model and evaluates its performance on carefully constructed diagnostic benchmarks. The output shows how robust the model is to variations in logical structures, revealing specific weaknesses in handling conjunctions, disjunctions, negations, and logical equivalences.
No commits in the last 6 months.
Use this if you are developing or evaluating natural language processing models that perform deductive reasoning and need to understand their logical robustness.
Not ideal if you are a non-developer seeking an off-the-shelf solution for text analysis or general-purpose natural language understanding.
Stars
8
Forks
3
Language
Python
License
MIT
Category
NLP
Last pushed
Nov 11, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/INK-USC/RobustLR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
thunlp/OpenAttack
An Open-Source Package for Textual Adversarial Attack.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
jind11/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
thunlp/OpenBackdoor
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
thunlp/SememePSO-Attack
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial...