SapienzaNLP/wsd-hard-benchmark

Data and code for "Nibbling at the Hard Core of Word Sense Disambiguation" (ACL 2022).

Overall score: 19 / 100 (Experimental)

This project provides a collection of improved, more challenging test sets for evaluating how well natural language processing systems determine the correct meaning of words in context, the task known as Word Sense Disambiguation (WSD). Given input sentences containing words to disambiguate, a system's predictions are scored by how accurately word senses are assigned. It is aimed primarily at researchers and developers refining AI models for language understanding.

No commits in the last 6 months.

Use this if you are a researcher or developer who needs to rigorously test and identify weaknesses in state-of-the-art AI models designed for understanding word meanings in text, especially for difficult or less common senses.

Not ideal if you are looking for a pre-trained, ready-to-use tool to perform Word Sense Disambiguation on your own text, as this project focuses on evaluation benchmarks rather than a deployable system.

natural-language-processing word-sense-disambiguation model-evaluation computational-linguistics semantic-analysis
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 5 / 25


Stars: 15
Forks: 1
Language: Python
License: none
Last pushed: Mar 25, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SapienzaNLP/wsd-hard-benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
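The same data can be fetched programmatically. Below is a minimal Python sketch using only the standard library; the endpoint path is taken from the curl example above, but the structure of the returned JSON (field names, nesting) is an assumption and should be checked against an actual response.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality report."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access).

    Note: the JSON schema is not documented here, so callers should
    inspect the raw response before relying on specific fields.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# Build the URL for the repository shown on this page.
print(quality_url("nlp", "SapienzaNLP", "wsd-hard-benchmark"))
```

The anonymous tier allows 100 requests per day; with a free key (passed however the service documents, e.g. a header or query parameter) the limit rises to 1,000 per day.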