yxuansu/TaCL

[NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning

Score: 26 / 100 · Experimental

TaCL improves the core language understanding ability of pre-trained models such as BERT, which are used for tasks like question answering, text summarization, and document classification. It takes existing text data as input and produces an enhanced BERT model whose token-level representations better distinguish the meanings of individual words in context. This is useful for AI researchers, natural language processing engineers, and data scientists working on advanced language-based AI applications.
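As a quick sanity check of such a model, one could load a released TaCL checkpoint with Hugging Face transformers and compare token representations. This is a minimal sketch; the checkpoint id "cambridgeltl/tacl-bert-base-uncased" is assumed from the project's release and may differ from what the repository currently publishes.

# Minimal sketch: load a TaCL-enhanced BERT checkpoint and inspect
# its token-level representations. The checkpoint id below is an
# assumption; substitute your own if the published name differs.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "cambridgeltl/tacl-bert-base-uncased"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentence = "The bank raised interest rates near the river bank."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)

# Cosine similarity between the two occurrences of "bank"
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
bank_positions = [i for i, t in enumerate(tokens) if t == "bank"]
a, b = hidden[bank_positions[0]], hidden[bank_positions[1]]
sim = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"cosine similarity between the two 'bank' tokens: {sim.item():.3f}")

Per the paper's motivation, a TaCL-trained encoder should keep contextually different occurrences of the same word less artificially similar than a vanilla BERT would.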

No commits in the last 6 months.

Use this if you are pre-training or fine-tuning BERT for critical language understanding tasks and need more accurate and discriminative word representations.
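For intuition about what the pre-training change involves, here is a simplified sketch (not the authors' exact implementation) of a token-level contrastive objective in PyTorch: each student token representation is pulled toward a frozen teacher's representation at the same position and pushed away from the teacher's representations of the other tokens in the sequence. The function name and tensor shapes are illustrative.

# Simplified token-level contrastive loss: positives are same-position
# teacher representations, negatives are the other tokens in the sequence.
import torch
import torch.nn.functional as F

def token_contrastive_loss(student_hidden, teacher_hidden, temperature=0.07):
    """student_hidden, teacher_hidden: (seq_len, hidden_dim) for one sequence."""
    s = F.normalize(student_hidden, dim=-1)
    t = F.normalize(teacher_hidden, dim=-1)
    logits = s @ t.T / temperature       # (seq_len, seq_len) similarity matrix
    targets = torch.arange(s.size(0))    # positive pair = same position
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for BERT hidden states
student = torch.randn(12, 768, requires_grad=True)
teacher = torch.randn(12, 768)           # teacher is frozen (no gradient)
loss = token_contrastive_loss(student, teacher)
loss.backward()
print(f"contrastive loss: {loss.item():.3f}")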

Not ideal if you are looking for an out-of-the-box solution for a specific end-user application without needing to work with foundational language models directly.

natural-language-processing machine-learning-research AI-model-training text-analytics language-model-development
No license · Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 9 / 25

Stars: 94
Forks: 6
Language: Python
License: None
Last pushed: Jun 08, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/yxuansu/TaCL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
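The same data can be fetched from Python; below is a minimal sketch using only the standard library. The response schema is not documented on this page, so the JSON is simply printed as-is.

# Fetch the quality data for yxuansu/TaCL and print the raw JSON response.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/yxuansu/TaCL"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))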