INK-USC/sparse-distillation

Code for "Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models"

Score: 35 / 100 (Emerging)

This project helps data scientists and machine learning engineers speed up text classification. It distills a large pre-trained language model (such as RoBERTa) over a corpus of unlabeled text into a student model that, although bigger in parameter count, runs classification much faster at inference while preserving accuracy. This is ideal for teams that need to deploy text classifiers efficiently without sacrificing performance.

No commits in the last 6 months.

Use this if you need to classify text data quickly and accurately, and have access to both labeled examples and a large corpus of unlabeled text.

Not ideal if you don't have a pre-trained RoBERTa model or a substantial amount of unlabeled text data to leverage for the distillation process.
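For orientation, below is a minimal, generic sketch of the teacher-student distillation idea this project builds on. It is not the repository's actual training code; the teacher, student, optimizer, and batch objects are hypothetical placeholders you would supply yourself.

import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, batch, temperature=2.0):
    # One distillation step: the student is trained to match the teacher's
    # soft label distribution on a batch of (possibly unlabeled) text.
    with torch.no_grad():
        teacher_logits = teacher(batch)   # large pre-trained model (e.g. RoBERTa)
    student_logits = student(batch)       # fast student classifier
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Consult the repository's README and scripts for its actual training pipeline and model definitions.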

text-classification natural-language-processing sentiment-analysis machine-learning-operations model-optimization
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 12
Forks: 3
Language: Python
License: MIT
Last pushed: May 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/INK-USC/sparse-distillation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
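For scripted access, a minimal Python sketch of the same request is shown below; it assumes the endpoint returns JSON (the response schema is not documented here).

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/INK-USC/sparse-distillation"

resp = requests.get(URL, timeout=10)   # no API key needed within the free 100 requests/day
resp.raise_for_status()
print(resp.json())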