yxuansu/TaCL
[NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
TaCL improves the core understanding ability of pretrained language models such as BERT, which underpin tasks like question answering, text summarization, and document classification. It takes existing text data as input and produces an enhanced BERT model that better distinguishes the meanings of individual tokens. This is useful for AI researchers, NLP engineers, and data scientists building advanced language-based applications.
No commits in the last 6 months.
Use this if you are pre-training or fine-tuning BERT for critical language understanding tasks and need more accurate and discriminative word representations.
Not ideal if you are looking for an out-of-the-box solution for a specific end-user application without needing to work with foundational language models directly.
Stars
94
Forks
6
Language
Python
License
—
Category
NLP
Last pushed
Jun 08, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/yxuansu/TaCL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
n-waves/multifit
The code to reproduce results from paper "MultiFiT: Efficient Multi-lingual Language Model...
princeton-nlp/SimCSE
[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
yxuansu/SimCTG
[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation
alibaba-edu/simple-effective-text-matching
Source code of the ACL2019 paper "Simple and Effective Text Matching with Richer Alignment Features".
Shark-NLP/OpenICL
OpenICL is an open-source framework to facilitate research, development, and prototyping of...