machinelearningZH/semantic-search-eval

A framework for evaluating semantic search across custom datasets, metrics, and embedding backends.

Score: 38 / 100 (Emerging)

This tool helps data scientists and ML engineers compare retrieval backends such as OpenAI embeddings, Hugging Face models, or BM25 to find the best fit for their specific document collections. It takes your documents and search queries, evaluates how well each backend retrieves relevant information, and outputs performance metrics and visualizations. It is aimed at professionals building or deploying intelligent search solutions who need to rigorously test and select the most effective underlying technology.

No commits in the last 6 months.

Use this if you need to objectively compare the performance of various semantic search models on your own datasets and determine which one is most suitable for your application.

Not ideal if you don't have existing documents and corresponding test queries, or if you need a tool that handles document preprocessing (like cleaning or chunking) before evaluation.

Tags: information-retrieval, document-search, model-evaluation, text-analytics, ML-operations
Flags: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 14 / 25

How are scores calculated?
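The exact formula is not documented here, but the headline score appears to be the sum of the four category subscores shown above, each out of 25. A minimal sketch, assuming that additive model:

```python
# Assumption: the 0-100 headline score is the sum of four 25-point
# category subscores (values taken from the listing above).
subscores = {"Maintenance": 2, "Adoption": 7, "Maturity": 15, "Community": 14}
total = sum(subscores.values())
print(total)  # 38, matching the 38/100 headline score
```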

Stars: 38
Forks: 6
Language: Python
License: MIT
Last pushed: May 26, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/machinelearningZH/semantic-search-eval"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
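The same endpoint can be called from code. A minimal Python sketch using only the standard library; the URL pattern is taken from the curl example above, but the JSON response shape is an assumption and should be checked against the actual API:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score API URL for a GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality data and parse it as JSON (hypothetical response shape)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("machinelearningZH", "semantic-search-eval"))
```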