qiangsiwei/bert_distill

BERT distillation (BERT-based distillation experiments)

Score: 41 / 100 (Emerging)

This project helps machine learning practitioners create smaller, faster text classification models without sacrificing too much accuracy. It takes a large, pre-trained BERT model and a dataset (like customer reviews), then transfers BERT's knowledge to a more lightweight model such as TextCNN or BiLSTM. The output is a smaller model that can classify text with good performance, suitable for deployment in environments with limited resources.
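The repository's exact training objective isn't documented on this page, but BERT-to-small-model distillation is typically trained with a weighted sum of a soft-label term (student matches the teacher's temperature-scaled output distribution) and a hard-label cross-entropy term. A minimal NumPy sketch of that common formulation, with function names and the `T`/`alpha` hyperparameters chosen here for illustration:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * soft KL(teacher || student) + (1 - alpha) * hard cross-entropy.

    The T*T factor keeps the soft-term gradient magnitude comparable
    across temperatures (as in Hinton et al.'s distillation setup).
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean() * T * T
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kl + (1 - alpha) * ce
```

In practice the student here would be a TextCNN or BiLSTM and the teacher a fine-tuned BERT; the loss above is what gets minimized over the student's parameters.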

314 stars. No commits in the last 6 months.

Use this if you need to deploy text classification capabilities on devices or systems with computational constraints, where a full BERT model would be too slow or resource-intensive.

Not ideal if your primary goal is to achieve the absolute highest possible accuracy, or if you have ample computational resources and inference speed is not a critical concern.

text-classification sentiment-analysis natural-language-processing model-optimization resource-constrained-ai
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 23 / 25


Stars: 314
Forks: 82
Language: Python
License: None
Last pushed: Jul 30, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/qiangsiwei/bert_distill"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
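The same endpoint can be called from Python with only the standard library. The helper names below are my own, and the JSON response schema is not documented on this page, so the fetch helper simply returns the parsed payload for inspection:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Mirrors the curl example; "nlp" is the category segment shown above.
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Response schema is undocumented here; inspect the returned dict yourself.
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("nlp", "qiangsiwei", "bert_distill")` requests the same resource as the curl command above; without an API key this counts against the 100-requests/day limit.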