gyunggyung/DistilKoBiLSTM

Distilling Task-Specific Knowledge from Teacher Model into BiLSTM

Score: 32 / 100 (Emerging)

This project helps practitioners analyze Korean text reviews quickly and efficiently. It distills the knowledge of a large teacher language model into a much smaller, faster BiLSTM student. The result is an accurate binary sentiment classifier that runs significantly faster and uses far fewer computational resources than its teacher, making it well suited to developers building sentiment analysis features into applications.
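To make the distillation step concrete, here is a minimal training sketch in PyTorch of the usual task-specific recipe (the student matches the teacher's logits while also fitting the hard labels). Everything here is illustrative, not this repository's actual code: SmallBiLSTM, the dimensions, the cached teacher_logits, and the 0.5 mixing weight are assumptions; MSE on logits is the objective from Tang et al. (2019), whose paper title this project echoes.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallBiLSTM(nn.Module):
        """Hypothetical BiLSTM student for binary sentiment classification."""
        def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden_dim, num_classes)

        def forward(self, token_ids):
            x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
            _, (h, _) = self.lstm(x)             # h: (2, batch, hidden_dim)
            h = torch.cat([h[0], h[1]], dim=-1)  # join forward/backward final states
            return self.fc(h)                    # logits: (batch, num_classes)

    student = SmallBiLSTM()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    # Dummy batch standing in for tokenized Korean reviews; in a real run the
    # teacher's logits would be precomputed once by the frozen teacher model.
    token_ids = torch.randint(1, 30000, (8, 64))
    labels = torch.randint(0, 2, (8,))
    teacher_logits = torch.randn(8, 2)

    student_logits = student(token_ids)
    # Match the teacher's logits (MSE, as in Tang et al. 2019) while still
    # fitting the hard labels; the 0.5 mixing weight is illustrative.
    loss = 0.5 * F.mse_loss(student_logits, teacher_logits) \
         + 0.5 * F.cross_entropy(student_logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()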

No commits in the last 6 months.

Use this if you need to perform binary sentiment classification on Korean text with high accuracy, but are constrained by computational resources, inference speed, or model size.

Not ideal if your primary goal is to train a brand-new, cutting-edge transformer model from scratch, or if you require fine-grained sentiment analysis beyond binary classification.
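For the constrained-deployment case above, inference with the distilled student is just a forward pass. A minimal sketch, reusing the hypothetical SmallBiLSTM from the training example; real token ids would come from your Korean tokenizer, and real weights from load_state_dict():

    import torch

    @torch.no_grad()
    def predict_sentiment(model, token_ids):
        """Return (predicted class ids, class probabilities) for a batch."""
        model.eval()
        logits = model(token_ids)              # (batch, 2) for binary sentiment
        probs = torch.softmax(logits, dim=-1)
        return probs.argmax(dim=-1), probs

    student = SmallBiLSTM()  # hypothetical student defined in the sketch above
    preds, probs = predict_sentiment(student, torch.randint(1, 30000, (2, 64)))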

Topics: Korean NLP, sentiment analysis, text classification, resource optimization, machine learning, deployment
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 9 / 25

The four subscores sum to the overall score: 0 + 7 + 16 + 9 = 32 / 100.


Stars: 31
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Dec 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gyunggyung/DistilKoBiLSTM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
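The same endpoint can be called from Python; below is a minimal sketch using the requests library. The response schema is not documented on this page, so printing the raw JSON is an assumption about how you would start exploring it.

    import requests

    URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/gyunggyung/DistilKoBiLSTM"

    # Anonymous access is rate-limited to 100 requests/day; a free key
    # raises that to 1,000/day (how to pass the key is not shown here).
    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()
    print(resp.json())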