cambridgeltl/MirrorWiC
[CoNLL'21] MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models
This project helps natural language processing researchers and practitioners improve how language models capture the meaning of a word from its surrounding context. It takes raw text, such as Wikipedia articles, and uses it to fine-tune existing pretrained language models. The result is a more discriminative representation of words in different contexts, which can then be used in downstream NLP applications.
No commits in the last 6 months.
Use this if you need to enhance the ability of pretrained language models to distinguish between different meanings of a word based on its context, without requiring extensive human-annotated data.
Not ideal if your primary goal is general-purpose language model fine-tuning for tasks where word-in-context ambiguity is not a critical factor.
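For a concrete sense of the underlying technique, here is a minimal sketch of eliciting a word-in-context embedding from an off-the-shelf encoder with Hugging Face transformers. The model name, subword matching, and mean-pooling strategy are illustrative assumptions for demonstration, not this repository's exact pipeline (MirrorWiC fine-tunes the encoder first):

import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative choice: any BERT-style encoder works for this sketch.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_in_context_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean-pool the last-layer hidden states of `word`'s subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"][0].tolist()
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    # Locate the target word's subword span inside the encoded sentence.
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i : i + len(word_ids)] == word_ids:
            span = slice(i, i + len(word_ids))
            break
    else:
        raise ValueError(f"{word!r} not found in the tokenized sentence")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    return hidden[span].mean(dim=0)

# The same surface form in two senses should yield distinct vectors.
river = word_in_context_embedding("She sat on the river bank.", "bank")
money = word_in_context_embedding("He deposited cash at the bank.", "bank")
print(torch.cosine_similarity(river, money, dim=0).item())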
Stars
12
Forks
5
Language
Python
License
MIT
Category
NLP
Last pushed
Oct 31, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/cambridgeltl/MirrorWiC"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
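Equivalently, a short sketch of calling the same endpoint from Python with requests; the endpoint is taken verbatim from the curl example, and since the listing does not specify how an API key is passed, this sticks to anonymous access:

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/cambridgeltl/MirrorWiC"

resp = requests.get(URL, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())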
Higher-rated alternatives
airaria/TextBrewer
A PyTorch-based knowledge distillation toolkit for natural language processing
sunyilgdx/NSP-BERT
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original...
princeton-nlp/CoFiPruning
[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
kssteven418/LTP
[KDD'22] Learned Token Pruning for Transformers
georgian-io/Transformers-Domain-Adaptation
:no_entry: [DEPRECATED] Adapt Transformer-based language models to new text domains