federicoarenasl/Evaluating-w-Embeddings
In this paper we compare and evaluate two simple embedding models that can be constructed directly from a co-occurrence matrix extracted from Twitter data: Positive Pointwise Mutual Information (PPMI) and Hellinger Principal Component Analysis (H-PCA). For each embedding model we consider three alternative metrics for word similarity: cosine, Euclidean, and Manhattan distance.
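To make the PPMI model concrete, here is a minimal sketch of how a PPMI matrix can be derived from a raw co-occurrence count matrix, with cosine similarity between the resulting word vectors. This is an illustrative implementation of the standard PPMI formula, not code from the repository; the function names and the toy matrix are assumptions.

```python
import numpy as np

def ppmi(C):
    """Positive PMI from a co-occurrence count matrix C (words x contexts).

    PPMI(i, j) = max(0, log2( P(i, j) / (P(i) * P(j)) )).
    """
    total = C.sum()
    p_ij = C / total                              # joint probabilities
    p_i = C.sum(axis=1, keepdims=True) / total    # word marginals
    p_j = C.sum(axis=0, keepdims=True) / total    # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(p_ij / (p_i * p_j))
    pmi[~np.isfinite(pmi)] = 0.0                  # zero counts -> PMI treated as 0
    return np.maximum(pmi, 0.0)                   # clip negatives: PPMI

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-word co-occurrence matrix (hypothetical counts)
C = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
M = ppmi(C)          # each row of M is a word's PPMI embedding
sim = cosine(M[0], M[1])
```

Rows of the PPMI matrix serve directly as word embeddings; the H-PCA model in the paper instead applies PCA to the square-rooted co-occurrence probabilities (the Hellinger transform) to obtain dense, low-dimensional vectors.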
No commits in the last 6 months.
Stars
4
Forks
1
Language
Jupyter Notebook
License
—
Category
Last pushed
Mar 21, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/federicoarenasl/Evaluating-w-Embeddings"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
CyberZHG/keras-pos-embd
Position embedding layers in Keras
PrashantRanjan09/WordEmbeddings-Elmo-Fasttext-Word2Vec
Using pre-trained word embeddings (Fasttext, Word2Vec)
mb-14/embeddings.js
Word embeddings for the web
anishLearnsToCode/word-embeddings
Continuous Bag 💼 of Words Model to create Word embeddings for a word from a given Corpus 📚 and...
neuro-symbolic-ai/multi_relational_hyperbolic_word_embeddings
Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions