milistu/bertdistiller
Faster, smaller BERT models in just a few lines of code.
This tool helps machine learning engineers and data scientists shrink and speed up BERT-style language models with little loss in accuracy. You provide a large, pre-trained model and get back a smaller, faster version that's ready for deployment in applications such as chatbots, search engines, or sentiment analysis tools. It's ideal when you need to run performant NLP models on resource-constrained devices or at scale.
No commits in the last 6 months. Available on PyPI.
Use this if you need to make your BERT-based natural language processing models run faster and use less memory while keeping most of their accuracy.
Not ideal if you need to train a large language model from scratch or are looking for highly specialized, task-specific model fine-tuning techniques.
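The listing doesn't document bertdistiller's actual API, but the technique it packages is knowledge distillation: a small student model is trained to match the softened output distribution of a large teacher. A minimal, dependency-free sketch of that objective (function names and toy logits are illustrative, not part of the library):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling; higher T yields softer distributions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Scaled by T^2 so gradient magnitudes stay comparable across
    temperatures (the standard convention in distillation setups).
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# Toy logits: the student roughly tracks the teacher, so the loss is small
# but positive; a perfect match would drive it to zero.
teacher = [3.0, 1.0, 0.2]
student = [2.5, 1.2, 0.1]
loss = distillation_loss(teacher, student)
```

In a real training loop this term is minimized (often mixed with the ordinary cross-entropy on hard labels) while backpropagating through the student only; tools like bertdistiller automate that loop for BERT checkpoints.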
Stars: 9
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Apr 17, 2025
Commits (30d): 0
Dependencies: 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/milistu/bertdistiller"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
MinishLab/model2vec
Fast State-of-the-Art Static Embeddings
AnswerDotAI/ModernBERT
Bringing BERT into modernity via both architecture changes and scaling
tensorflow/hub
A library for transfer learning by reusing parts of TensorFlow models.
Embedding/Chinese-Word-Vectors
100+ pre-trained Chinese word vectors
twang2218/vocab-coverage
An analysis of language models' Chinese-language cognitive abilities