WorksApplications/chiVe
Japanese word embedding with Sudachi and NWJC 🌿
This project provides pre-trained Japanese word embeddings that capture the meanings of, and relationships between, Japanese words. The vectors are trained on large collections of Japanese text, yielding a numerical representation (vector) for each word. Anyone working with Japanese text analysis, such as researchers, data scientists, or linguists, can use these embeddings to improve tasks like search, recommendation, or sentiment analysis.
171 stars. No commits in the last 6 months.
Use this if you need to analyze or process Japanese text and want to leverage the semantic meaning of words without training your own embeddings from scratch.
Not ideal if your primary focus is on languages other than Japanese, or if you require embeddings for very specialized, niche vocabularies not typically found in web corpora.
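The core idea behind word embeddings like chiVe is that semantically related words get vectors that point in similar directions, usually compared with cosine similarity. A minimal sketch of that idea, using tiny made-up 3-dimensional vectors as stand-ins (real chiVe vectors are much higher-dimensional, and the actual released files are loaded with tools such as gensim rather than defined by hand):

```python
import numpy as np

# Hypothetical toy vectors standing in for real chiVe embeddings.
# The words and values here are illustrative only.
emb = {
    "犬": np.array([0.9, 0.1, 0.0]),    # "dog"
    "猫": np.array([0.8, 0.2, 0.1]),    # "cat"
    "経済": np.array([0.0, 0.1, 0.9]),  # "economy"
}

def cosine(a, b):
    """Cosine similarity: the standard closeness measure for word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words score close to 1; unrelated words score near 0.
print(cosine(emb["犬"], emb["猫"]))
print(cosine(emb["犬"], emb["経済"]))
```

With pre-trained embeddings, this same comparison works across the whole vocabulary without training anything yourself, which is what makes them useful for search ranking and recommendation.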
Stars
171
Forks
6
Language
Python
License
Apache-2.0
Category
NLP
Last pushed
Mar 01, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/WorksApplications/chiVe"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
dselivanov/text2vec
Fast vectorization, topic modeling, distances and GloVe word embeddings in R.
vzhong/embeddings
Fast, DB Backed pretrained word embeddings for natural language processing.
dccuchile/spanish-word-embeddings
Spanish word embeddings computed with different methods and from different corpora
ncbi-nlp/BioSentVec
BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences
ibrahimsharaf/doc2vec
📓 Long(er) text representation and classification using Doc2Vec embeddings