ikergarcia1996/MetaVec
A monolingual and cross-lingual meta-embedding generation and evaluation framework
This project offers pre-computed meta-embeddings that combine multiple existing word embedding models into a single, comprehensive representation of words and their meanings. By merging several word embedding files into one unified, high-quality meta-embedding, it captures complementary information that no single source model provides on its own. Natural Language Processing (NLP) researchers and engineers can use these embeddings to improve the performance of text analysis, machine translation, or information retrieval systems.
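The core idea of a meta-embedding is simple: look up the same word in several source embedding spaces and fuse the vectors. Below is a minimal sketch of one common baseline, normalised averaging; this is an illustrative assumption, not MetaVec's actual algorithm, and it assumes all source models share the same dimensionality (real frameworks project sources to a common space first).

```python
import numpy as np

def average_meta_embedding(sources, vocab):
    """Fuse several word-embedding lookups into one meta-embedding
    by L2-normalising each source vector and averaging.

    A simple baseline for illustration only; assumes every source
    uses the same vector dimensionality. Words missing from all
    sources are skipped."""
    meta = {}
    for word in vocab:
        vecs = []
        for emb in sources:
            if word in emb:
                v = np.asarray(emb[word], dtype=float)
                vecs.append(v / np.linalg.norm(v))  # unit length per source
        if vecs:  # average whatever sources cover this word
            meta[word] = np.mean(vecs, axis=0)
    return meta
```

Note that a word found in only one source still receives a vector, which is one practical advantage of meta-embeddings: the combined vocabulary is the union of the source vocabularies.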
No commits in the last 6 months.
Use this if you need state-of-the-art word embeddings to boost the accuracy of your NLP models.
Not ideal if you're only working with a highly specialized vocabulary not commonly found in general language models, or if you need embeddings for a language not covered by the bundled source models.
Stars
79
Forks
6
Language
Python
License
GPL-3.0
Category
Last pushed
Apr 29, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/ikergarcia1996/MetaVec"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
embeddings-benchmark/mteb
MTEB: Massive Text Embedding Benchmark
harmonydata/harmony
The Harmony Python library: a research tool for psychologists to harmonise data and...
yannvgn/laserembeddings
LASER multilingual sentence embeddings as a pip package
embeddings-benchmark/results
Data for the MTEB leaderboard
Hironsan/awesome-embedding-models
A curated list of awesome embedding models tutorials, projects and communities.