Alibaba-NLP/MultilangStructureKD
[ACL 2020] Structure-Level Knowledge Distillation For Multilingual Sequence Labeling
This project builds smaller, more efficient natural language processing (NLP) models that work across multiple languages. You feed in existing high-performing monolingual models (the teachers) and get back a unified multilingual model (the student) that keeps strong accuracy while being smaller and faster. It is aimed at NLP engineers and researchers who need to deploy performant NLP for global audiences or across diverse linguistic data.
No commits in the last 6 months.
Use this if you need to create a single, efficient NLP model that understands and processes text in several different languages without having to train a separate model for each.
Not ideal if you only work with a single language or if your primary need is for state-of-the-art monolingual performance rather than multilingual efficiency.
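How it works, roughly: the multilingual student is trained to match the output distributions of the monolingual teachers. The repo itself distills structure-level (CRF) distributions for sequence labeling; the PyTorch lines below are only a minimal token-level sketch of that idea, with random tensors standing in for real model outputs. Nothing here is the repo's actual API.

import torch
import torch.nn.functional as F

batch, seq_len, num_labels = 2, 8, 5
temperature = 2.0  # softens both distributions before matching

# Stand-ins for per-token label logits from a monolingual teacher
# and the multilingual student (hypothetical shapes, not the repo's).
teacher_logits = torch.randn(batch, seq_len, num_labels)
student_logits = torch.randn(batch, seq_len, num_labels, requires_grad=True)

# Distillation loss: KL divergence between the softened teacher and
# student distributions over labels at each token position.
teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
kd_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
kd_loss.backward()  # gradients flow only into the student
print(kd_loss.item())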
Stars: 72
Forks: 9
Language: Python
License: —
Category: —
Last pushed: Nov 23, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Alibaba-NLP/MultilangStructureKD"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
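For programmatic access, here is a minimal Python sketch of the same request. The response schema and the header for supplying an API key are not documented above, so treat both as assumptions.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/Alibaba-NLP/MultilangStructureKD"
resp = requests.get(url, timeout=10)  # if you have a key, pass it however the service docs specify
resp.raise_for_status()
print(resp.json())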
Higher-rated alternatives
airaria/TextBrewer
A PyTorch-based knowledge distillation toolkit for natural language processing
sunyilgdx/NSP-BERT
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original...
princeton-nlp/CoFiPruning
[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
kssteven418/LTP
[KDD'22] Learned Token Pruning for Transformers
georgian-io/Transformers-Domain-Adaptation
:no_entry: [DEPRECATED] Adapt Transformer-based language models to new text domains