gyunggyung/DistilKoBiLSTM
Distilling Task-Specific Knowledge from a Teacher Model into a BiLSTM
This project helps practitioners run sentiment analysis on Korean text reviews quickly and efficiently. It distills the knowledge of a large, powerful teacher language model into a much smaller, faster BiLSTM student. The result is an accurate binary sentiment classifier that runs significantly faster and needs far fewer computational resources, making it well suited to developers building sentiment analysis features into applications.
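The core of task-specific distillation is training the student against the teacher's temperature-softened output distribution in addition to the hard labels. The sketch below is a minimal, framework-free NumPy illustration of that objective; the function names, temperature, and mixing weight are illustrative choices, not taken from this repository (which uses its own training code).

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: cross-entropy between the teacher's and student's
    # temperature-T distributions, scaled by T^2 to keep gradient magnitudes
    # comparable (as in Hinton et al., "Distilling the Knowledge in a
    # Neural Network").
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * (T ** 2)
    # Hard-label term: ordinary cross-entropy against the gold labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

# Binary sentiment toy batch: a confident teacher, an unsure student.
teacher = np.array([[4.0, -2.0], [-3.0, 3.0]])
student = np.array([[0.5, 0.2], [0.1, 0.4]])
labels = np.array([0, 1])
loss = distillation_loss(student, teacher, labels)
```

In a real training loop this scalar would be minimized with respect to the BiLSTM student's parameters, while the teacher's logits are precomputed and held fixed.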
No commits in the last 6 months.
Use this if you need to perform binary sentiment classification on Korean text with high accuracy, but are constrained by computational resources, inference speed, or model size.
Not ideal if your primary goal is to train a brand-new, cutting-edge transformer model from scratch, or if you require fine-grained sentiment analysis beyond binary classification.
Stars: 31
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gyunggyung/DistilKoBiLSTM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
airaria/TextBrewer: A PyTorch-based knowledge distillation toolkit for natural language processing
sunyilgdx/NSP-BERT: The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original..."
kssteven418/LTP: [KDD'22] Learned Token Pruning for Transformers
princeton-nlp/CoFiPruning: [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
georgian-io/Transformers-Domain-Adaptation: [DEPRECATED] Adapt Transformer-based language models to new text domains