microsoft/LoNLI
Testing Diverse Reasoning of NLI Systems
This project provides a comprehensive test suite for evaluating how well Natural Language Inference (NLI) systems handle different types of reasoning. It supplies a large set of test cases designed to probe 17 specific reasoning capabilities and helps you analyze your NLI system's performance on each. Researchers and developers working on NLI models use it to thoroughly assess, and then improve, their models' understanding of linguistic nuance.
No commits in the last 6 months.
Use this if you need to systematically test and analyze the diverse reasoning capabilities of your Natural Language Inference (NLI) models, going beyond simple accuracy metrics.
Not ideal if you are looking for a pre-trained NLI model or a general-purpose dataset for training your NLI system from scratch.
Stars
10
Forks
3
Language
Jupyter Notebook
License
MIT
Category
nlp
Last pushed
Nov 28, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/microsoft/LoNLI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
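For programmatic use, the curl call above can be wrapped in a small script. This is a minimal sketch, assuming the endpoint returns JSON; the `X-Api-Key` header name and the response handling are assumptions, not documented behavior of the service — check the API docs before relying on them.

```python
# Hypothetical sketch of querying the quality API from Python.
# Only the endpoint URL comes from this page; the "X-Api-Key" header
# name is an assumption and may differ in the real service.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL, e.g. .../quality/nlp/microsoft/LoNLI."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality record for a repo; an optional key raises the
    rate limit from 100 to 1,000 requests/day."""
    req = urllib.request.Request(build_url(category, repo))
    if api_key:
        req.add_header("X-Api-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_url("nlp", "microsoft/LoNLI"))
```

The network call is kept out of `build_url` so the URL construction can be tested without hitting the rate limit.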
Higher-rated alternatives
coetaur0/ESIM
Implementation of the ESIM model for natural language inference with PyTorch
erickrf/multiffn-nli
Implementation of the multi feed-forward network architecture by Parikh et al. (2016) for...
vanzytay/EMNLP2018_NLI
Repository for NLI models (EMNLP 2018)
hsinyuan-huang/FusionNet-NLI
An example for applying FusionNet to Natural Language Inference
sdnr1/EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference