milistu/bertdistiller

Faster, smaller BERT models in just a few lines of code.

Score: 32 / 100 (Emerging)

This tool helps machine learning engineers and data scientists shrink and speed up transformer language models such as BERT without significantly sacrificing accuracy. You provide a large, pre-trained model and receive a smaller, faster distilled version that's ready for deployment in applications like chatbots, search engines, or sentiment analysis tools. It's ideal for those who need to run performant NLP models efficiently on resource-constrained devices or at scale.

No commits in the last 6 months. Available on PyPI.

Use this if you need to make your BERT-based natural language processing models run faster and use less memory while keeping most of their accuracy.

Not ideal if you need to train a large language model from scratch or are looking for highly specialized, task-specific model fine-tuning techniques.
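The distillation workflow described above trains the small "student" model to match the large "teacher" model's softened output distribution. bertdistiller's own API is not shown on this page, so the following is a generic, library-agnostic sketch of the core knowledge-distillation loss term that such tools are built on; the function names and temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling: higher T yields a softer distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # teacher provides soft targets
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

In practice this soft-target term is combined with ordinary cross-entropy on the hard labels, and the student is trained with standard backpropagation; a distillation library presumably wraps that full loop so it takes only a few lines.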

natural-language-processing machine-learning-operations model-deployment computational-efficiency
Status: Stale (6 months)
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 9
Forks:
Language: Python
License: Apache-2.0
Last pushed: Apr 17, 2025
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/milistu/bertdistiller"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
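The same request can be made from Python using only the standard library. This sketch assumes nothing beyond the endpoint shown in the curl command above; the response schema isn't documented here, so it's returned as raw parsed JSON, and the function names are illustrative.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(repo: str) -> str:
    """Build the API URL for a given 'owner/name' repo slug."""
    return f"{BASE}/{repo}"

def fetch_quality(repo: str, timeout: float = 10.0) -> dict:
    """GET the quality-score JSON for a repo.

    No API key is needed for up to 100 requests/day; the response
    structure is whatever the API returns (schema not documented here).
    """
    with urllib.request.urlopen(quality_url(repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("milistu/bertdistiller")` fetches this card's underlying data.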