vzhong/embeddings
Fast, DB-backed pretrained word embeddings for natural language processing.
This package helps natural language processing and machine learning developers efficiently access pretrained word embeddings. Instead of loading massive files into memory, it uses a database backend. You provide words or phrases, and it returns their numerical representations, speeding up development workflows for text-based applications.
224 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you are a developer building NLP or machine learning applications and need fast, database-backed access to large pretrained word embeddings.
Not ideal if you are not a Python developer or prefer to manage embedding files directly without a database layer.
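The core idea behind the package is worth seeing concretely: instead of loading a multi-gigabyte embedding file into memory, vectors live in a database and are fetched one word at a time. The sketch below illustrates that pattern with SQLite from Python's standard library. It is a minimal, hypothetical illustration of the technique, not this package's actual API; the function names (`create_store`, `put`, `get`) and schema are invented for the example.

```python
import array
import sqlite3

# Sketch of a database-backed embedding store (illustrative, not the
# package's real API): each word maps to its vector serialized as a
# float32 blob, so lookups never require loading the whole file.

def create_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS embeddings (word TEXT PRIMARY KEY, vec BLOB)"
    )
    return db

def put(db, word, vector):
    # Serialize the vector as packed float32 bytes.
    blob = array.array("f", vector).tobytes()
    db.execute("INSERT OR REPLACE INTO embeddings VALUES (?, ?)", (word, blob))

def get(db, word):
    # Fetch and deserialize a single word's vector; None if unknown.
    row = db.execute(
        "SELECT vec FROM embeddings WHERE word = ?", (word,)
    ).fetchone()
    if row is None:
        return None
    return list(array.array("f", row[0]))

db = create_store()
put(db, "apple", [0.1, 0.2, 0.3])
print(get(db, "apple"))      # vector round-trips (up to float32 precision)
print(get(db, "zzz_unknown"))  # None for out-of-vocabulary words
```

Keeping vectors in a keyed table trades a small per-lookup cost for near-zero startup time and memory, which is the workflow speedup the package advertises.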
Stars: 224
Forks: 30
Language: Python
License: MIT
Category:
Last pushed: Apr 02, 2025
Commits (30d): 0
Dependencies: 3
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/vzhong/embeddings"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
dselivanov/text2vec
Fast vectorization, topic modeling, distances and GloVe word embeddings in R.
dccuchile/spanish-word-embeddings
Spanish word embeddings computed with different methods and from different corpora
ncbi-nlp/BioSentVec
BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences
avidale/compress-fasttext
Tools for shrinking fastText models (in gensim format)
ibrahimsharaf/doc2vec
:notebook: Long(er) text representation and classification using Doc2Vec embeddings