vector-db-benchmark and Vector-Arena

These two projects are competing, overlapping benchmarking frameworks for vector databases. Qdrant's tool is far more established (353 stars vs 1), while Vector-Arena attempts to differentiate itself through multiprocessing isolation and more granular latency metrics (diverse, sequential, filtered, and bulk search).

vector-db-benchmark: score 61 (Established)
Maintenance 10/25 · Adoption 10/25 · Maturity 16/25 · Community 25/25
Stars: 353 · Forks: 139 · Commits (30d): 0 · Language: Python · License: Apache-2.0
No package published, no known dependents

Vector-Arena: score 23 (Experimental)
Maintenance 13/25 · Adoption 1/25 · Maturity 9/25 · Community 0/25
Stars: 1 · Commits (30d): 0 · Language: Python · License: MIT
No package published, no known dependents

About vector-db-benchmark

qdrant/vector-db-benchmark

Framework for benchmarking vector search engines

This framework helps you compare the speed and efficiency of different vector search engines. It takes a vector search engine, a dataset (like embeddings for text or images), and a defined test scenario as input. It then measures how well and how quickly the engine performs, providing results to help you choose the best one for your specific needs. This is ideal for machine learning engineers, MLOps specialists, or anyone building or deploying applications that rely on fast and accurate vector search.

vector-search MLOps search-engine-evaluation database-benchmarking embedding-retrieval
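The workflow the description sketches, take an engine, a dataset, and a test scenario, then measure speed and result quality, can be illustrated with a minimal hypothetical harness. The function names and the brute-force stand-in "engine" below are illustrative assumptions, not vector-db-benchmark's actual API:

```python
# Minimal sketch of a vector-search benchmark loop: time each query against
# an engine and score recall@k versus exact brute-force ground truth.
# All names here are illustrative, not vector-db-benchmark's real interface.
import time
import numpy as np

rng = np.random.default_rng(42)
corpus = rng.standard_normal((1000, 64)).astype("float32")   # the "dataset"
queries = rng.standard_normal((20, 64)).astype("float32")    # the "scenario"
K = 10

def exact_topk(q, vectors, k):
    # Brute-force nearest neighbours by L2 distance: the ground truth.
    dists = np.linalg.norm(vectors - q, axis=1)
    return set(np.argsort(dists)[:k].tolist())

def benchmark(search_fn, queries, k):
    latencies, recalls = [], []
    for q in queries:
        truth = exact_topk(q, corpus, k)
        t0 = time.perf_counter()
        result = search_fn(q, k)
        latencies.append(time.perf_counter() - t0)
        recalls.append(len(truth & set(result)) / k)
    return {
        "mean_latency_ms": 1000 * sum(latencies) / len(latencies),
        "recall_at_k": sum(recalls) / len(recalls),
    }

# Stand-in "engine": exact search, so recall@k should come out as 1.0.
stats = benchmark(lambda q, k: exact_topk(q, corpus, k), queries, K)
print(f"recall@{K}: {stats['recall_at_k']:.2f}")
```

An approximate engine (HNSW, IVF, etc.) would plug in as a different `search_fn`, trading some recall for lower latency, which is exactly the trade-off such frameworks exist to quantify.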

About Vector-Arena

M4iKZ/Vector-Arena

A comprehensive, multiprocessing-isolated benchmark for evaluating vector database performance and quality. Measures insertion speed, search latency (diverse, sequential, filtered, and bulk), recall accuracy, and memory usage across standard (ChromaDB, LanceDB, Qdrant, FAISS, USearch) and custom engine implementations.
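The "multiprocessing-isolated" approach Vector-Arena describes can be sketched with the standard library: each engine is benchmarked in its own child process, so one engine's allocations and caches cannot skew another's memory or timing numbers. The engine names and workload below are placeholders under that assumption, not Vector-Arena's actual code:

```python
# Sketch of process-isolated benchmarking: one child process per engine.
# The workload and engine names are placeholders, not Vector-Arena's code.
import multiprocessing as mp
import time

def run_engine(name, queue):
    # Stand-in workload; a real harness would insert vectors and search here.
    t0 = time.perf_counter()
    sum(i * i for i in range(100_000))
    queue.put((name, time.perf_counter() - t0))

# The "fork" start method (POSIX) clones the parent, so no __main__ guard is
# needed; on Windows you would use "spawn" plus an if __name__ guard instead.
ctx = mp.get_context("fork")
queue = ctx.Queue()
results = {}
for name in ["engine-a", "engine-b"]:
    # Fresh process per engine: its allocations die with it on exit.
    p = ctx.Process(target=run_engine, args=(name, queue))
    p.start()
    p.join()
    engine, elapsed = queue.get()
    results[engine] = elapsed
print(results)
```

Running engines sequentially in separate processes also means a crash or OOM in one engine fails only that engine's run rather than the whole benchmark.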

Scores updated daily from GitHub, PyPI, and npm data.