embeddings-benchmark/results
Data for the MTEB leaderboard
This project provides the underlying data for the MTEB leaderboard, which ranks text embedding models. It collects evaluation results from many models and publishes a structured dataset of their performance across tasks. It is aimed at anyone who needs to compare text embedding models for natural language processing work, such as machine learning engineers or NLP researchers.
Use this if you are developing or selecting text embedding models and need to access raw, standardized evaluation results to understand their real-world performance.
Not ideal if you are looking for a tool to train or fine-tune embedding models, as this project focuses solely on benchmarking results.
Stars: 47
Forks: 135
Language: Python
License: —
Category: —
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/embeddings-benchmark/results"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
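For programmatic use, here is a minimal sketch in Python (the repository's listed language). It assumes the endpoint returns a JSON payload; the response schema is not documented here, so the example only fetches and prints the body.

import requests

# Public endpoint from the curl example above; 100 requests/day without a key.
# The JSON response format is an assumption -- the schema is not documented on this page.
URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/embeddings-benchmark/results"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()  # surfaces HTTP errors, e.g. 429 once the daily limit is hit
print(resp.json())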
Related tools
embeddings-benchmark/mteb: MTEB: Massive Text Embedding Benchmark
harmonydata/harmony: The Harmony Python library: a research tool for psychologists to harmonise data and...
yannvgn/laserembeddings: LASER multilingual sentence embeddings as a pip package
Hironsan/awesome-embedding-models: A curated list of awesome embedding models tutorials, projects and communities.
fresh-stack/freshstack: This repository helps you evaluate your models on the FreshStack benchmark!