UKPLab/useb

Heterogeneous, Task- and Domain-Specific Benchmark for Unsupervised Sentence Embeddings, used in the TSDAE paper: https://arxiv.org/abs/2104.06979.

Quality score: 29 / 100 (Experimental)

This tool helps researchers and practitioners evaluate how well different sentence embedding models understand and represent the meaning of text. You input a trained sentence embedding model and various text datasets, and it outputs performance scores on tasks like question-answering, scientific paper citation analysis, and identifying paraphrases in social media. This is for anyone who develops or applies sentence embedding models and needs to rigorously test their effectiveness across diverse real-world text.
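As a sketch of what an evaluation run might look like (the useb.run entry point and its keyword arguments follow the usage shown in the repository's README; the random-vector embedding function is only a placeholder for the interface):

import torch
from useb import run  # evaluation entry point, per the repository README

@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
    # Placeholder: random 768-dim vectors. Swap in a real model,
    # e.g. SentenceTransformer.encode, to get meaningful scores.
    return torch.rand(len(sentences), 768)

# One embedding function per benchmark task (AskUbuntu, CQADupStack,
# TwitterPara, SciDocs); here the same placeholder is reused for all.
results, results_main_metric = run(
    semb_fn_askubuntu=semb_fn,
    semb_fn_cqadupstack=semb_fn,
    semb_fn_twitterpara=semb_fn,
    semb_fn_scidocs=semb_fn,
    eval_type='test',
    data_eval_path='data-eval',  # assumes the evaluation data was downloaded here
)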

No commits in the last 6 months.

Use this if you need to objectively compare the quality of different unsupervised sentence embedding models on various real-world text understanding tasks.

Not ideal if you are looking for a tool to train sentence embedding models from scratch, as it focuses solely on evaluation.

Tags: natural-language-processing, information-retrieval, text-similarity, academic-search, social-media-analysis
Flags: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25

Stars: 29
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Jan 04, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/UKPLab/useb"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
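
For scripted access, a minimal Python sketch of the same call (assumes the endpoint returns a JSON body; the requests package is the only dependency):

import requests

# Same endpoint as the curl command above; no key needed for up to
# 100 requests/day.
resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/nlp/UKPLab/useb",
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # assumes a JSON response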