cambridgeltl/MirrorWiC

[CoNLL'21] MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models

36 / 100 (Emerging)

This project helps natural language processing researchers and practitioners improve how language models understand the meaning of words based on their surrounding context. It takes raw text, like Wikipedia articles, and uses it to fine-tune existing language models. The result is a more nuanced representation of words in different contexts, which can then be used in various downstream NLP applications.
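As a rough illustration of the idea (not the repository's actual API), the sketch below uses the standard Hugging Face `transformers` library to pull a word-in-context vector out of an encoder by mean-pooling the hidden states of the target word's subwords; the checkpoint name and pooling choice are assumptions for demonstration only.

```python
# Hypothetical sketch: extracting a word-in-context embedding from an encoder
# with Hugging Face transformers. The checkpoint and pooling strategy are
# placeholders, not the MirrorWiC codebase's own interface.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "She sat on the bank of the river."
target = "bank"

enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)

# Locate the target word's subword span and mean-pool its hidden states.
target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
ids = enc["input_ids"][0].tolist()
start = next(i for i in range(len(ids)) if ids[i:i + len(target_ids)] == target_ids)
word_vec = hidden[start:start + len(target_ids)].mean(dim=0)  # word-in-context vector
print(word_vec.shape)
```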

No commits in the last 6 months.

Use this if you need to enhance the ability of pretrained language models to distinguish between different meanings of a word based on its context, without requiring extensive human-annotated data.

Not ideal if your primary goal is general-purpose language model fine-tuning for tasks where word-in-context ambiguity is not a critical factor.

natural-language-processing computational-linguistics word-sense-disambiguation semantic-analysis machine-learning-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 12
Forks: 5
Language: Python
License: MIT
Last pushed: Oct 31, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/cambridgeltl/MirrorWiC"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
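If you prefer to call the endpoint from Python rather than curl, a minimal sketch follows; only the URL comes from the listing above, and the shape of the JSON response is an assumption.

```python
# Hypothetical sketch: fetching the same quality data in Python.
# Only the URL is taken from the listing; the response fields are assumed.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/cambridgeltl/MirrorWiC"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumed to be a JSON document describing the project
print(data)
```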