danaugrs/binary-word-embeddings
Generates binary word embeddings by analyzing Wikipedia
This tool helps researchers and analysts quickly understand how concepts, brands, or entities relate to one another based on their appearances in Wikipedia articles. You input a list of words, and it scores how semantically similar each word is to the others in the list. This is useful for anyone who needs to identify connections between terms, such as market researchers, content strategists, or academic researchers.
No commits in the last 6 months.
Use this if you need a straightforward way to discover hidden semantic relationships between a specific list of words, without complex natural language processing models.
Not ideal if you require highly nuanced semantic understanding, depend on a very large vocabulary, or need to analyze relationships beyond simple co-occurrence.
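The co-occurrence idea described above can be sketched in a few lines. This is an illustrative toy, not the repo's actual implementation: the corpus, word list, and Jaccard scoring are all assumptions chosen to show how binary vectors yield a simple similarity score.

```python
from itertools import combinations

# Hypothetical toy corpus standing in for Wikipedia articles.
corpus = [
    "apple banana fruit market",
    "apple orange fruit juice",
    "python java programming language",
    "python programming code",
]

words = ["apple", "banana", "python", "java"]

# One bit per document: 1 if the word occurs in that document.
vectors = {
    w: [1 if w in doc.split() else 0 for doc in corpus] for w in words
}

def jaccard(a, b):
    """Jaccard similarity of two binary vectors."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

for w1, w2 in combinations(words, 2):
    print(f"{w1} ~ {w2}: {jaccard(vectors[w1], vectors[w2]):.2f}")
```

Words that share documents (apple/banana, python/java) score above zero, while unrelated pairs score 0.0, which mirrors the kind of co-occurrence signal the tool extracts from Wikipedia at scale.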
Stars
10
Forks
—
Language
Python
License
MIT
Category
Last pushed
Mar 11, 2017
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/danaugrs/binary-word-embeddings"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
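The same endpoint can be called from Python with only the standard library. This is a minimal sketch: the response schema is not documented here, so it just parses and prints whatever JSON the API returns, and the `fetch_quality` helper name is hypothetical.

```python
import json
import urllib.request

REPO = "danaugrs/binary-word-embeddings"
URL = f"https://pt-edge.onrender.com/api/v1/quality/embeddings/{REPO}"

def fetch_quality(url: str) -> dict:
    """Fetch the quality record as JSON (schema not documented here)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality(URL), indent=2))
```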
Higher-rated alternatives
shibing624/text2vec
text2vec, text to vector.
predict-idlab/pyRDF2Vec
Python Implementation and Extension of RDF2Vec
IntuitionEngineeringTeam/chars2vec
Character-based word embeddings model based on RNN for handling real world texts
IITH-Compilers/IR2Vec
Implementation of IR2Vec, LLVM IR Based Scalable Program Embeddings
ddangelov/Top2Vec
Top2Vec learns jointly embedded topic, document and word vectors.