EloiZ/embedding_evaluation
Evaluate your word embeddings
This tool helps researchers and natural language processing practitioners assess the quality of word embeddings. You provide your word embedding models, and it outputs scores indicating how well they capture semantic similarity, relatedness, and other linguistic properties. It suits anyone who develops or uses word embeddings and needs a quick read on their performance.
No commits in the last 6 months.
Use this if you need to quickly and easily evaluate the intrinsic quality of different word embedding models against established linguistic benchmarks.
Not ideal if you need to evaluate how well your embeddings perform on a specific downstream task, as intrinsic evaluation scores don't always predict real-world usefulness.
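The kind of intrinsic evaluation described above typically scores a model by how well its cosine similarities between word pairs correlate with human similarity judgments from a benchmark dataset. A minimal, self-contained sketch of that idea (the toy vectors and pair scores below are hypothetical, not from this library or any real benchmark):

```python
import math

# Toy embeddings (hypothetical vectors; a real run would load your trained model)
embeddings = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "car":   [0.1, 0.9, 0.3],
    "truck": [0.0, 0.7, 0.5],
}

# Hypothetical human similarity judgments, in the style of word-pair benchmarks
benchmark = [("cat", "dog", 9.0), ("car", "truck", 8.5), ("cat", "car", 1.5)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def rank(xs):
    # Rank positions (1 = smallest); assumes no ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

model = [cosine(embeddings[a], embeddings[b]) for a, b, _ in benchmark]
human = [s for _, _, s in benchmark]

# Spearman correlation: 1 - 6 * sum(d^2) / (n * (n^2 - 1)), assuming no ties
rm, rh = rank(model), rank(human)
n = len(benchmark)
rho = 1 - 6 * sum((a - b) ** 2 for a, b in zip(rm, rh)) / (n * (n * n - 1))
print(rho)  # 1.0 here: the model's similarity ranking matches the human one
```

A higher correlation means the embedding space orders word pairs the way humans do, which is what tools like this one report per benchmark.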
Stars
35
Forks
11
Language
Python
License
—
Category
Last pushed
Dec 03, 2019
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/EloiZ/embedding_evaluation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
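If you prefer to consume the endpoint from Python rather than curl, the per-repository URL can be built from the pattern in the example above. This is a sketch only: the URL pattern is taken from the curl command, but the JSON fields in the sample payload are assumptions for illustration, not a documented response schema.

```python
import json

# Base path taken from the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    # Assumes the endpoint follows the <base>/<owner>/<repo> pattern shown above
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url("EloiZ", "embedding_evaluation")
print(url)

# Hypothetical response shape -- the real field names are not documented here,
# so this hard-coded payload stands in for an actual HTTP response.
sample = json.loads('{"stars": 35, "forks": 11, "language": "Python"}')
print(sample["stars"])
```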
Higher-rated alternatives
embeddings-benchmark/mteb
MTEB: Massive Text Embedding Benchmark
harmonydata/harmony
The Harmony Python library: a research tool for psychologists to harmonise data and...
yannvgn/laserembeddings
LASER multilingual sentence embeddings as a pip package
embeddings-benchmark/results
Data for the MTEB leaderboard
Hironsan/awesome-embedding-models
A curated list of awesome embedding models tutorials, projects and communities.