mteb and results
The first is the benchmark framework and evaluation suite; the second is the results repository that populates the public leaderboard. They are complements in a producer-consumer relationship: mteb produces per-task evaluation results, and results collects them as the data behind the leaderboard.
About mteb
embeddings-benchmark/mteb
MTEB: Massive Text Embedding Benchmark
This framework lets machine learning engineers and researchers assess the quality of text embedding models. You provide an embedding model and a set of evaluation tasks (such as text classification or retrieval); the output is a set of metrics showing how well the model performs on those tasks, enabling informed comparison and selection.
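As an illustration, the sketch below runs this flow with the mteb Python package and a sentence-transformers model. The model name, task name, and output folder are arbitrary example choices, and the exact API surface may differ between mteb versions.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any embedding model exposing an encode() method works;
# this small sentence-transformers model is just an example.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Pick the evaluation tasks, e.g. a classification task.
evaluation = MTEB(tasks=["Banking77Classification"])

# Run the benchmark; per-task metric files are written to the
# given folder for later comparison across models.
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
```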
About results
embeddings-benchmark/results
Data for the MTEB leaderboard
This project provides the underlying data for the MTEB leaderboard, which ranks text embedding models. It takes in evaluation results from various models and outputs a structured dataset of their performance across tasks. Machine learning engineers and NLP researchers use it to compare the effectiveness of embedding models on natural language processing tasks.
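To make that data flow concrete, here is a minimal sketch of aggregating per-task result files into a single leaderboard-style number. The directory layout and JSON field names (task_name, scores, main_score) are assumptions about the result schema, not a documented contract.

```python
import json
from pathlib import Path

# Hypothetical layout mirroring the results repository:
# results/<model_name>/<revision>/<TaskName>.json, one file per task.
def collect_main_scores(model_dir: str) -> dict[str, float]:
    """Gather the main score of each evaluated task for one model."""
    scores: dict[str, float] = {}
    for path in Path(model_dir).glob("**/*.json"):
        with open(path) as f:
            record = json.load(f)
        # Field names here are assumptions about the per-task schema
        # and may differ between mteb versions.
        task = record.get("task_name", path.stem)
        test_split = record.get("scores", {}).get("test", [])
        if test_split:
            scores[task] = test_split[0].get("main_score")
    return scores

# Example: average a model's main scores across tasks.
if __name__ == "__main__":
    per_task = collect_main_scores("results/all-MiniLM-L6-v2")
    if per_task:
        print(f"mean main score: {sum(per_task.values()) / len(per_task):.4f}")
```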