SpringerNLP/Chapter5
Chapter 5: Embeddings
This project builds numerical representations (embeddings) that capture the meaning of, and relationships between, words in large text collections. It takes a corpus of text, such as a collection of articles, and outputs an embedding vector for each word. Data scientists, linguists, and researchers working with text analysis can then use these embeddings in a range of downstream applications.
No commits in the last 6 months.
Use this if you need to transform large volumes of text into a numerical format suitable for machine learning or advanced linguistic analysis.
Not ideal if you're looking for a pre-built application or a simple search engine for text, as this focuses on generating word representations.
Stars
9
Forks
4
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Jul 23, 2019
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/SpringerNLP/Chapter5"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
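The same endpoint can be called from Python rather than curl. Only the URL shown above is taken from this page; the response schema and the mechanism for passing an API key are not documented here, so this sketch just builds the per-repository URL and prints the JSON as-is.

```python
# Sketch of querying the quality API from Python. The endpoint comes from
# the curl example on this page; the response format is an assumption.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL (hypothetical helper)."""
    return f"{BASE}/{owner}/{repo}"

if __name__ == "__main__":
    # Keyless access is rate-limited to 100 requests/day per the note above.
    with urlopen(quality_url("SpringerNLP", "Chapter5")) as resp:
        print(json.dumps(json.load(resp), indent=2))
```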
Higher-rated alternatives
CyberZHG/keras-pos-embd
Position embedding layers in Keras
PrashantRanjan09/WordEmbeddings-Elmo-Fasttext-Word2Vec
Using pre-trained word embeddings (Fasttext, Word2Vec)
mb-14/embeddings.js
Word embeddings for the web
anishLearnsToCode/word-embeddings
Continuous Bag 💼 of Words Model to create Word embeddings for a word from a given Corpus 📚 and...
neuro-symbolic-ai/multi_relational_hyperbolic_word_embeddings
Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions