isaacus-dev/mleb

The code used to evaluate embedding models on the Massive Legal Embedding Benchmark (MLEB).

Score: 43/100 (Emerging)

This tool helps legal tech developers, researchers, and data scientists evaluate how well their legal text embedding models understand and reason about legal documents. You supply an embedding model and the relevant API keys, and it outputs performance scores across diverse legal datasets, revealing your model's strengths and weaknesses in legal applications.

Use this if you are developing or fine-tuning AI models for legal applications and need to rigorously benchmark their performance on a comprehensive set of legal tasks and document types.

Not ideal if you are looking for the MLEB datasets themselves or general-purpose text embedding benchmarks outside the legal domain.
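
For a sense of what a benchmark like this measures: evaluating an embedding model on a retrieval-style legal dataset generally reduces to embedding queries and documents, ranking documents by cosine similarity, and scoring a metric such as recall@k. The sketch below is illustrative only; the function name, data shapes, and choice of metric are assumptions for exposition, not mleb's actual code.

import numpy as np

def recall_at_k(query_embs, doc_embs, relevant_idx, k=10):
    # Cosine similarity: L2-normalize rows, then take dot products.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = q @ d.T
    # Indices of the top-k documents per query, highest similarity first.
    topk = np.argsort(-sims, axis=1)[:, :k]
    # A query is a hit if its relevant document appears in its top k.
    hits = [rel in row for rel, row in zip(relevant_idx, topk)]
    return float(np.mean(hits))

A harness such as this repository would run comparable metrics across many legal datasets and aggregate them into an overall score.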

legal-AI legal-tech natural-language-processing legal-research AI-model-evaluation
No package · No dependents
Maintenance: 10/25
Adoption: 7/25
Maturity: 15/25
Community: 11/25

How are scores calculated? Judging by the figures shown, the overall score is the sum of the four 25-point subscores: 10 + 7 + 15 + 11 = 43.

Stars: 32
Forks: 4
Language: Python
License: MIT
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/isaacus-dev/mleb"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
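
The same endpoint can be consumed programmatically. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented here):

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/isaacus-dev/mleb"
# No API key required for up to 100 requests/day.
with urllib.request.urlopen(url) as resp:
    report = json.load(resp)
print(report)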
