beir-cellar/beir

A Heterogeneous Benchmark for Information Retrieval. Easy to use: evaluate your models across 15+ diverse IR datasets.

Score: 67 / 100 (Established)

BEIR helps developers and researchers working with search engines and recommender systems compare the effectiveness of different information retrieval models. It takes various textual datasets and your trained retrieval model, then outputs standardized performance metrics like NDCG, MAP, and Recall, allowing you to understand how well your model retrieves relevant information.
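
The typical workflow, per BEIR's documented quickstart, is: download a dataset, load its corpus, queries, and relevance judgments, wrap your retrieval model, retrieve, and evaluate. A minimal sketch, where the scifact dataset, the msmarco-distilbert-base-tas-b model, and the download URL are illustrative choices taken from BEIR's README:

from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download and unzip one of the smaller BEIR datasets (scifact)
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Wrap a SentenceTransformers model for exact (brute-force) dense retrieval
model = DRES(models.SentenceBERT("msmarco-distilbert-base-tas-b"), batch_size=128)
retriever = EvaluateRetrieval(model, score_function="dot")

# Retrieve top-k documents per query, then score against the relevance judgments
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)

Swapping in your own model only requires an object exposing the same encode interface; the standardized NDCG@k, MAP@k, Recall@k, and Precision@k outputs make results comparable across datasets.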

2,105 stars. Used by 9 other packages. Available on PyPI.

Use this if you need to rigorously evaluate and compare different information retrieval models across a wide range of tasks and datasets.

Not ideal if you are looking for a pre-built search engine solution or don't intend to develop and test your own retrieval models.

information-retrieval search-engine-development recommender-systems natural-language-processing model-evaluation
Maintenance: 6 / 25
Adoption: 15 / 25
Maturity: 25 / 25
Community: 21 / 25


Stars: 2,105
Forks: 235
Language: Python
License: Apache-2.0
Last pushed: Oct 16, 2025
Commits (30d): 0
Dependencies: 3
Reverse dependents: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/beir-cellar/beir"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
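
To consume the same endpoint from Python instead of curl, a minimal sketch using the requests library; the response schema is not documented on this page, so the sketch simply dumps the JSON payload for inspection:

import requests

# Same endpoint as the curl example above; no key needed up to 100 requests/day
url = "https://pt-edge.onrender.com/api/v1/quality/rag/beir-cellar/beir"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # schema undocumented here, so just inspect the payload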