MinishLab/model2vec
Fast State-of-the-Art Static Embeddings
Model2Vec helps machine learning practitioners distill large, slow language models into much smaller, faster static versions for a range of text tasks. It takes an existing Sentence Transformer model or raw text input and produces numerical representations (embeddings). Data scientists, NLP engineers, and AI developers can then use these compact embeddings for applications such as text classification, information retrieval, or building RAG systems.
2,008 stars. Used by 7 other packages. Actively maintained with 7 commits in the last 30 days. Available on PyPI.
Use this if you need to deploy text understanding models efficiently, especially in resource-constrained environments or when speed is critical, while maintaining high performance.
Not ideal if you require the absolute highest accuracy from the largest, most complex language models, and are not concerned with model size or inference speed.
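Conceptually, a static embedding model replaces a full transformer forward pass with a per-token vector lookup followed by pooling, which is why it is so much faster. A minimal sketch of that idea (toy vocabulary and random vectors for illustration only, not Model2Vec's actual API or distilled weights):

```python
import numpy as np

# Toy static embedding table: token -> fixed vector. Model2Vec distills
# such a table from a Sentence Transformer; the vocabulary and vectors
# below are made up purely for illustration.
rng = np.random.default_rng(0)
VOCAB = {w: i for i, w in enumerate(["fast", "static", "embeddings", "are", "small"])}
EMBEDDINGS = rng.normal(size=(len(VOCAB), 8)).astype(np.float32)

def encode(text: str) -> np.ndarray:
    """Embed a text by mean-pooling the vectors of its known tokens."""
    ids = [VOCAB[t] for t in text.lower().split() if t in VOCAB]
    if not ids:
        # No known tokens: fall back to a zero vector.
        return np.zeros(EMBEDDINGS.shape[1], dtype=np.float32)
    return EMBEDDINGS[ids].mean(axis=0)

vec = encode("static embeddings are fast")
```

Because encoding is just indexing and averaging, throughput scales with tokenization speed rather than model depth, at some cost in contextual accuracy.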
Stars: 2,008
Forks: 116
Language: Python
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 7
Dependencies: 8
Reverse dependents: 7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/model2vec"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
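The same endpoint can be called programmatically. A small Python sketch using only the standard library (the response schema is not documented here, so the JSON is returned as-is):

```python
import json
import urllib.request

# Base endpoint from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data API URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data as parsed JSON (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("MinishLab", "model2vec")` requests the URL shown in the curl command above.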
Related tools
AnswerDotAI/ModernBERT
Bringing BERT into modernity via both architecture changes and scaling
tensorflow/hub
A library for transfer learning by reusing parts of TensorFlow models.
Embedding/Chinese-Word-Vectors
100+ pretrained Chinese word vectors
twang2218/vocab-coverage
Analysis of language models' Chinese-language comprehension
Santosh-Gupta/SpeedTorch
Library for faster pinned CPU <-> GPU transfer in Pytorch