shimo-lab/sembei
:rice_cracker: Word embeddings without going through word segmentation :rice_cracker:
This tool helps Japanese natural language processing (NLP) practitioners create word embeddings directly from raw text, bypassing traditional word segmentation. It takes Japanese text as input and produces numerical representations (embeddings) for words, which can then be used in downstream NLP tasks. Data scientists, computational linguists, and researchers working with Japanese text data would find this useful.
No commits in the last 6 months.
Use this if you are working with Japanese text and want to generate word embeddings without the complexities or potential errors introduced by explicit word segmentation.
Not ideal if your primary focus is on languages other than Japanese, or if you prefer a word embedding method that explicitly relies on word segmentation.
Stars
14
Forks
4
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Mar 19, 2017
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/shimo-lab/sembei"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
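The curl call above can also be wrapped in a few lines of Python. A minimal sketch: the `quality_url` helper name and the sample response fields (`stars`, `forks`, `language`) are illustrative assumptions — only the endpoint URL itself comes from this page, so check the actual response schema before relying on specific fields.

```python
import json

# Base endpoint taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "shimo-lab", "sembei")
print(url)  # → https://pt-edge.onrender.com/api/v1/quality/nlp/shimo-lab/sembei

# Hypothetical response body, used here so the example runs offline;
# the real API's fields may differ.
sample = '{"stars": 14, "forks": 4, "language": "Jupyter Notebook"}'
data = json.loads(sample)
print(data["stars"])  # → 14
```

From here, swapping the sample string for an actual request (e.g. with `urllib.request.urlopen(url)`) is a one-line change, subject to the 100 requests/day limit noted above.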
Higher-rated alternatives
dselivanov/text2vec
Fast vectorization, topic modeling, distances and GloVe word embeddings in R.
vzhong/embeddings
Fast, DB Backed pretrained word embeddings for natural language processing.
dccuchile/spanish-word-embeddings
Spanish word embeddings computed with different methods and from different corpora.
ncbi-nlp/BioSentVec
BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences.
avidale/compress-fasttext
Tools for shrinking fastText models (in gensim format).