INK-USC/RobustLR

A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners

Score: 34 / 100 (Emerging)

This tool helps AI researchers and developers assess how well their deductive reasoning models understand logical relationships in text. It takes a trained natural language model and evaluates its performance on a carefully constructed diagnostic benchmark. The output shows how robust the model is to variations in logical structure, revealing specific weaknesses in handling conjunctions, disjunctions, negations, and logical equivalences.

No commits in the last 6 months.

Use this if you are developing or evaluating natural language processing models that perform deductive reasoning and need to understand their logical robustness.

Not ideal if you are a non-developer seeking an off-the-shelf solution for text analysis or general-purpose natural language understanding.

AI model evaluation, natural language understanding, deductive reasoning, logical AI, machine learning research
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 14 / 25
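
The overall score above appears to be the sum of the four category scores, each out of 25. A minimal sketch checking that, using the values shown on this page:

```python
# Category scores from this page (each out of 25).
breakdown = {"Maintenance": 0, "Adoption": 4, "Maturity": 16, "Community": 14}

# The overall score (34/100) is assumed here to be their simple sum;
# the exact weighting is not documented on this page.
total = sum(breakdown.values())
print(total)  # → 34
```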


Stars: 8
Forks: 3
Language: Python
License: MIT
Last pushed: Nov 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/INK-USC/RobustLR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
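
The same report can be fetched programmatically. A minimal Python sketch: the URL pattern `/api/v1/quality/<category>/<owner>/<repo>` is inferred from the curl example above, and the JSON response body is an assumption, not documented behavior.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build a report URL following the pattern seen in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_report(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode a quality report (assumes the body is JSON)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("nlp", "INK-USC", "RobustLR"))
# → https://pt-edge.onrender.com/api/v1/quality/nlp/INK-USC/RobustLR
```

Unauthenticated callers get 100 requests/day, so cache responses where possible.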