prajjwal1/generalize_lm_nli

Code for the EMNLP 2021 workshop paper "Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics".

Overall score: 31 / 100 (Emerging)

Natural Language Inference (NLI) models commonly rely on simple shortcuts (dataset heuristics) rather than genuinely understanding the text. This project offers tools and pre-trained models for researchers to test and improve how well their NLI models generalize. You provide text pairs with labels (entailment, contradiction, neutral) and apply various training strategies to produce models that are more robust.
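For context, here is a minimal sketch of the NLI setup described above, using the Hugging Face transformers library. The checkpoint name is an assumption chosen for illustration (an off-the-shelf MNLI classifier), not a model shipped by this repo; substitute a model trained with this repo's strategies as appropriate.

# Minimal NLI inference sketch with Hugging Face transformers.
# "roberta-large-mnli" is an illustrative off-the-shelf NLI checkpoint,
# not one published by this repository.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # assumption: any 3-way (entailment/neutral/contradiction) classifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair and run a forward pass.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label names come from the checkpoint's config (order varies by model).
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])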

No commits in the last 6 months.

Use this if you are an NLP researcher or data scientist focused on developing and evaluating the generalization capabilities of NLI models, particularly in avoiding dataset-specific heuristics.

Not ideal if you are looking for a plug-and-play NLI solution for immediate production use without deep experimentation into model generalization.

Tags: Natural Language Inference · NLP · Model Evaluation · AI Generalization · Textual Entailment · Machine Learning Research
Stale (6m) · No Package · No Dependents
Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 34
Forks: 3
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Jan 15, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/prajjwal1/generalize_lm_nli"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
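The same data can also be fetched programmatically. A short sketch using Python's requests library (the response JSON schema is not documented here, so the parsed payload is simply printed):

# Fetch the quality report for this repo via the public API.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/prajjwal1/generalize_lm_nli"
resp = requests.get(url, timeout=30)
resp.raise_for_status()  # raise on HTTP errors (e.g. rate limiting)
print(resp.json())       # schema not documented here, so just print the parsed JSON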