MinishLab/tokenlearn

Pre-train Static Word Embeddings

Quality score: 47 / 100 (Emerging)

This tool helps machine learning engineers and researchers create custom static word embeddings for their natural language processing applications. You provide a large text corpus and a sentence transformer model, and it produces a compact, highly efficient pre-trained Model2Vec embedding model. It is ideal when you need to map words to numerical vectors that capture their meaning in a specific domain.
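To make "static word embeddings" concrete: the output model is essentially a fixed lookup table from tokens to vectors, and a sentence embedding is the mean of its tokens' vectors. Below is a minimal, self-contained sketch of that idea; the vectors are made-up toy values, not output from Tokenlearn or Model2Vec.

```python
# Toy illustration of a static embedding model: each token maps to a fixed,
# precomputed vector, and a sentence is embedded by averaging token vectors.
# The vectors here are hypothetical; a real Model2Vec model distills them
# from a sentence transformer over a large corpus.

EMBEDDINGS = {
    "machine": [0.2, 0.8, 0.1],
    "learning": [0.3, 0.7, 0.2],
    "static": [0.9, 0.1, 0.4],
}

DIM = 3  # dimensionality of the toy vectors above


def embed(sentence: str) -> list[float]:
    """Average the static vectors of known tokens (zero vector if none match)."""
    vectors = [EMBEDDINGS[t] for t in sentence.lower().split() if t in EMBEDDINGS]
    if not vectors:
        return [0.0] * DIM
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]


print(embed("machine learning"))  # mean of the "machine" and "learning" vectors
```

Because inference is just a table lookup plus a mean, static models like this are orders of magnitude faster than running a transformer, which is the main appeal of distilling one for your domain.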

No commits in the last 6 months. Available on PyPI.

Use this if you need to train your own domain-specific static word embeddings efficiently using existing sentence transformer models.

Not ideal if you primarily work with dynamic embeddings or do not require custom pre-training beyond existing public models.

Tags: Natural Language Processing, Machine Learning Engineering, Text Analytics, AI Research, Information Retrieval
Stale (6 months)
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 25 / 25
Community: 11 / 25


Stars: 94
Forks: 8
Language: Python
License: MIT
Last pushed: Sep 09, 2025
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/tokenlearn"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.