isaacus-dev/mleb
The code used to evaluate embedding models on the Massive Legal Embedding Benchmark (MLEB).
This tool helps legal tech developers, researchers, and data scientists evaluate how well their text embedding models understand and reason about legal documents. You supply an embedding model and any required API keys; it outputs performance scores across a diverse set of legal datasets, revealing the model's strengths and weaknesses in legal applications.
Use this if you are developing or fine-tuning AI models for legal applications and need to rigorously benchmark their performance on a comprehensive set of legal tasks and document types.
Not ideal if you are looking for the MLEB datasets themselves or general-purpose text embedding benchmarks outside the legal domain.
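The repository defines the actual evaluation pipeline; as a rough illustration of the kind of scoring such a benchmark computes, here is a minimal retrieval-metric sketch. The function names and toy data are hypothetical and are not part of mleb's API; consult the repository README for the real entry points.

# Hypothetical sketch of an MLEB-style retrieval metric, not mleb's own code.
import numpy as np

def cosine_similarity(queries: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between query and document embeddings."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return q @ d.T

def recall_at_1(queries: np.ndarray, docs: np.ndarray, relevant: list[int]) -> float:
    """Fraction of queries whose top-ranked document is the labelled relevant one."""
    top = cosine_similarity(queries, docs).argmax(axis=1)
    return float((top == np.asarray(relevant)).mean())

# Toy usage: 3 queries, 4 documents, 8-dimensional embeddings.
rng = np.random.default_rng(0)
q_emb = rng.normal(size=(3, 8))
d_emb = rng.normal(size=(4, 8))
print(recall_at_1(q_emb, d_emb, relevant=[0, 1, 2]))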
Stars: 32
Forks: 4
Language: Python
License: MIT
Category: Embeddings
Last pushed: Feb 24, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/isaacus-dev/mleb"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
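A minimal Python equivalent of the curl call above, assuming the endpoint returns a JSON body (the response schema is not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/isaacus-dev/mleb"
resp = requests.get(url, timeout=10)  # unauthenticated: limited to 100 requests/day
resp.raise_for_status()
print(resp.json())  # assumed JSON; inspect the payload for the exact fields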
Higher-rated alternatives
embeddings-benchmark/mteb
MTEB: Massive Text Embedding Benchmark
harmonydata/harmony
The Harmony Python library: a research tool for psychologists to harmonise data and...
yannvgn/laserembeddings
LASER multilingual sentence embeddings as a pip package
embeddings-benchmark/results
Data for the MTEB leaderboard
Hironsan/awesome-embedding-models
A curated list of awesome embedding models tutorials, projects and communities.