EloiZ/embedding_evaluation

Evaluate your word embeddings

Score: 32 / 100 (Emerging)

This tool helps researchers and natural language processing practitioners assess the quality of word embeddings. You provide your word embedding models, and it outputs scores indicating how well they capture semantic similarity, relatedness, and other linguistic properties. It is aimed at anyone who develops or uses word embeddings and needs a quick read on their performance.

No commits in the last 6 months.

Use this if you need to quickly and easily evaluate the intrinsic quality of different word embedding models against established linguistic benchmarks.

Not ideal if you need to evaluate how well your embeddings perform on a specific downstream task, as intrinsic evaluation scores don't always predict real-world usefulness.
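The intrinsic evaluation this tool performs can be sketched in a few lines: score each benchmark word pair by cosine similarity between its embeddings, then measure rank agreement with human similarity judgments using Spearman's rho. The tiny embeddings and the benchmark pairs below are illustrative assumptions, not data from this repository.

```python
# Minimal sketch of intrinsic word-embedding evaluation:
# rank word pairs by cosine similarity and compare that ranking
# to human similarity judgments via Spearman's rho.
import math

# Toy 3-dimensional embeddings (hypothetical values).
embeddings = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "car":   [0.1, 0.9, 0.3],
    "truck": [0.2, 0.8, 0.4],
}

# Hypothetical benchmark: (word1, word2, human similarity rating).
benchmark = [
    ("cat", "dog", 9.0),
    ("car", "truck", 8.5),
    ("cat", "car", 2.0),
    ("dog", "truck", 2.5),
]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def ranks(xs):
    """Assign 1-based ranks in ascending order (no tie handling)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in benchmark]
human_scores = [s for _, _, s in benchmark]
print(f"Spearman rho: {spearman(model_scores, human_scores):.3f}")
```

Real benchmarks such as WordSim-353 or SimLex-999 follow the same pattern, just with hundreds of pairs and full-size embeddings.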

natural-language-processing computational-linguistics semantic-modeling language-model-evaluation text-representation
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 17 / 25


Stars: 35
Forks: 11
Language: Python
License: none
Last pushed: Dec 03, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/EloiZ/embedding_evaluation"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
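Parsing the API response might look like the sketch below. The JSON field names here are assumptions for illustration, not the documented schema; the numbers mirror the scorecard on this page. In real use, `sample` would be the body returned by the curl command above.

```python
# Hedged sketch: reading a quality-score payload for this repo.
# Field names ("repo", "score", "axes") are hypothetical, not the
# documented API schema; values mirror the scorecard on this page.
import json

sample = json.dumps({
    "repo": "EloiZ/embedding_evaluation",
    "score": 32,
    "axes": {"maintenance": 0, "adoption": 7, "maturity": 8, "community": 17},
})

data = json.loads(sample)
# The four 25-point axes should sum to the overall /100 score.
total = sum(data["axes"].values())
print(f'{data["repo"]}: {data["score"]}/100 (axes sum to {total})')
```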