sileod/pragmeval

Discourse Based Evaluation of Language Understanding

Quality score: 18 / 100 (Experimental)

This project provides a collection of 11 English-language datasets designed to evaluate how well Natural Language Understanding (NLU) models grasp the 'meaning as use' aspect of language, rather than just literal meaning. Each task takes text snippets as input and assigns labels related to discourse relations, speech acts, sarcasm, and more. It is aimed at researchers and practitioners building and evaluating NLU models who need to assess their models' understanding of context and subtle human communication.

No commits in the last 6 months.

Use this if you are developing Natural Language Understanding models and need to rigorously evaluate their ability to interpret conversational nuances, sarcasm, and the implied meaning in human discourse.

Not ideal if you are looking for a pre-trained NLU model or a tool for general text analysis without a specific focus on evaluating pragmatic understanding.

natural-language-understanding conversational-ai discourse-analysis nlp-evaluation computational-linguistics
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 21
Forks: 1
Language: Jupyter Notebook
License: none
Last pushed: Jan 28, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/sileod/pragmeval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
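For scripted access, the endpoint from the curl example above can be wrapped in a small helper. This is a minimal sketch: the URL pattern (`/api/v1/quality/{category}/{owner}/{repo}`) is inferred from that one example, and the JSON response schema is not documented here, so the fetch step just pretty-prints whatever comes back.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository.

    The path layout is assumed from the curl example above.
    """
    return f"{API_BASE}/{category}/{owner}/{repo}"

# To fetch (no API key needed, up to 100 requests/day):
#   with urllib.request.urlopen(quality_url("nlp", "sileod", "pragmeval")) as resp:
#       print(json.dumps(json.load(resp), indent=2))
```

The helper only assembles the URL; error handling, rate-limit backoff, and key-based authentication (for the 1,000/day tier) are left out of this sketch.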