kssteven418/LTP

[KDD'22] Learned Token Pruning for Transformers

Score: 44 / 100 (Emerging)

This project helps machine learning engineers and researchers optimize transformer models for natural language processing tasks. It takes a pre-trained I-BERT model that has been fine-tuned for a downstream task such as text classification (e.g., sentiment analysis) or question answering, and applies learned token pruning to cut the model's computation. The output is a smaller, faster transformer that largely preserves the original accuracy.
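At its core, learned token pruning removes tokens whose attention-based importance falls below a threshold that is learned per layer during fine-tuning. Below is a minimal PyTorch sketch of that idea; the function name, tensor shapes, and the fixed threshold are illustrative assumptions, not the repository's actual API (which learns the thresholds rather than hard-coding them).

import torch

def prune_tokens(hidden_states, attention_probs, threshold=0.01):
    # hidden_states:   (batch, seq_len, hidden) token representations
    # attention_probs: (batch, heads, seq_len, seq_len) softmaxed attention
    # threshold: illustrative fixed value; LTP learns one per layer instead

    # Importance of a token = the attention it receives,
    # averaged over heads and query positions.
    importance = attention_probs.mean(dim=(1, 2))   # (batch, seq_len)

    # Keep only tokens whose importance clears the threshold.
    keep_mask = (importance >= threshold).unsqueeze(-1)

    # Zero out pruned tokens; a real implementation drops them outright
    # so that later layers do less work.
    return hidden_states * keep_mask

# Toy example: one sentence of 8 tokens, 2 heads, hidden size 4.
h = torch.randn(1, 8, 4)
attn = torch.softmax(torch.randn(1, 2, 8, 8), dim=-1)
pruned = prune_tokens(h, attn, threshold=0.12)
print(int((pruned.abs().sum(-1) > 0).sum()), "of 8 tokens kept")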

No commits in the last 6 months.

Use this if you are a machine learning engineer working with transformer models and need to shrink them and speed up inference while preserving accuracy, for example for deployment in resource-constrained environments.

Not ideal if you are a practitioner looking for a ready-to-use, pre-optimized model without further training or configuration.

natural-language-processing model-optimization transformer-models text-classification computational-efficiency
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 99
Forks: 19
Language: Python
License: Apache-2.0
Last pushed: Feb 27, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/kssteven418/LTP"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
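If you'd rather call the endpoint from Python, here is a standard-library equivalent of the curl command above; the response schema is not documented on this page, so the script simply pretty-prints whatever JSON comes back.

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/kssteven418/LTP"

# Fetch and decode the JSON payload (schema not documented here,
# so inspect it before relying on specific fields).
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))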