machinelearningZH/semantic-search-eval
A framework for evaluating semantic search across custom datasets, metrics, and embedding backends.
This tool helps data scientists and ML engineers compare semantic search backends, such as OpenAI or Hugging Face embedding models, or a classical BM25 baseline, to find the best fit for a specific document collection. It takes your documents and search queries, evaluates how well each backend retrieves relevant information, and outputs performance metrics and visualizations. It is aimed at professionals building or deploying intelligent search solutions who need to rigorously test and select the most effective underlying retrieval technology.
No commits in the last 6 months.
Use this if you need to objectively compare the performance of various semantic search models on your own datasets and determine which one is most suitable for your application.
Not ideal if you don't have existing documents and corresponding test queries, or if you need a tool that handles document preprocessing (like cleaning or chunking) before evaluation.
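To illustrate the kind of comparison the tool automates, here is a minimal sketch of two common retrieval metrics, recall@k and mean reciprocal rank. All function and document names are illustrative and are not part of semantic-search-eval's actual API.

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant documents that appear in the top-k results."""
    top = set(ranked_ids[:k])
    return len(top & set(relevant_ids)) / len(relevant_ids)

def mrr(ranked_ids, relevant_ids):
    """Reciprocal rank of the first relevant document (0.0 if none found)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# Toy data: each query maps to the ranking one backend produced
# and the set of documents judged relevant for that query.
results = {
    "q1": (["d3", "d1", "d7"], {"d1"}),
    "q2": (["d2", "d5", "d4"], {"d4", "d9"}),
}

for query, (ranking, relevant) in results.items():
    print(query, recall_at_k(ranking, relevant, 3), mrr(ranking, relevant))
```

Averaging such per-query scores across a test set, per backend, is the core of any retrieval evaluation; the framework adds dataset handling, multiple metrics, and visualization on top.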
Stars: 38
Forks: 6
Language: Python
License: MIT
Category
Last pushed: May 26, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/machinelearningZH/semantic-search-eval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
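The same record can be fetched from Python using only the standard library. The endpoint URL below is taken from the curl example above; the shape of the JSON response is not documented here, so the function simply returns the parsed payload.

```python
import json
from urllib.request import urlopen

# Base endpoint from the curl example above.
API = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repository.

    No API key is needed for up to 100 requests/day.
    """
    with urlopen(f"{API}/{owner}/{repo}") as resp:
        return json.load(resp)

# Example (performs a live HTTP request):
# data = fetch_quality("machinelearningZH", "semantic-search-eval")
```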
Higher-rated alternatives
DmitryKey/bert-solr-search
Search with BERT vectors in Solr, Elasticsearch, OpenSearch and GSI APU
tkhang1999/semantic-food-search
A semantic food search web application built with Django, Solr, SBERT, and Docker
alihakimtaskiran/SemanticSearch
Meaningful Search
yberreby/ocaml-semsearch-jsoo
OCaml + js_of_ocaml + SBERT + TensorFlow.js
sabirdvd/sts-papers
Papers and surveys on the semantic similarity task