MinishLab/tokenlearn
Pre-train Static Word Embeddings
This tool helps machine learning engineers and researchers create custom static word embeddings for their natural language processing applications. Given a large text corpus and a sentence transformer model, it produces a fast, pre-trained Model2Vec embedding model. It is useful when you need word vectors tuned to the vocabulary and meaning of a specific domain.
No commits in the last 6 months. Available on PyPI.
Use this if you need to train your own domain-specific static word embeddings efficiently using existing sentence transformer models.
Not ideal if you primarily work with dynamic embeddings or do not require custom pre-training beyond existing public models.
Stars: 94
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Sep 09, 2025
Commits (30d): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/tokenlearn"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
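The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library; the endpoint URL is taken from the curl command above, but the shape of the JSON response (field names such as stars) is an assumption, so inspect the returned dict before relying on specific keys:

```python
import json
import urllib.request

# Endpoint from the curl example above; anonymous access is limited
# to 100 requests/day, so cache responses where possible.
URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/tokenlearn"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for a repo and parse it as JSON.

    The response schema is not documented here, so treat the
    returned dict's keys as unknown until you inspect them.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    print(sorted(data))  # list the top-level fields the API actually returns
```

With an API key (1,000 requests/day), you would typically pass it in a request header; the exact header name is not stated above, so check the provider's docs.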
Higher-rated alternatives
MinishLab/model2vec
Fast State-of-the-Art Static Embeddings
AnswerDotAI/ModernBERT
Bringing BERT into modernity via both architecture changes and scaling
tensorflow/hub
A library for transfer learning by reusing parts of TensorFlow models.
Embedding/Chinese-Word-Vectors
100+ pre-trained Chinese word vectors
twang2218/vocab-coverage
An analysis of the Chinese-language competence of language models