embeddings-benchmark/results

Data for the MTEB leaderboard

51 / 100 · Established

This project provides the underlying data for the MTEB leaderboard, which ranks text embedding models. It collects evaluation results from many models and publishes them as a structured dataset of per-task performance. It is aimed at anyone who needs to compare the effectiveness of text embedding models, such as machine learning engineers or NLP researchers.

Use this if you are developing or selecting text embedding models and need to access raw, standardized evaluation results to understand their real-world performance.

Not ideal if you are looking for a tool to train or fine-tune embedding models, as this project focuses solely on benchmarking results.

natural-language-processing machine-learning-engineering model-evaluation text-embeddings ai-model-selection
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 25 / 25


Stars: 47
Forks: 135
Language: Python
License: none
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/embeddings-benchmark/results"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
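The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the URL path segments are taken from the curl command above, but the shape of the JSON payload is not documented here, so the code only fetches and decodes it without assuming specific fields.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    # Build the per-project quality endpoint URL from its path segments.
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload.
    # Anonymous access is limited to 100 requests/day.
    with urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("embeddings", "embeddings-benchmark", "results")
```

Calling `fetch_quality("embeddings", "embeddings-benchmark", "results")` would retrieve the data shown on this page, subject to the rate limit.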