mburaksayici/smallevals

smallevals — CPU-fast, GPU-blazing fast offline retrieval evaluation for RAG systems with tiny QA models.

Score: 35 / 100 (Emerging)

This tool helps AI engineers and MLOps practitioners evaluate the retrieval accuracy of their Retrieval-Augmented Generation (RAG) systems. Given your existing vector database connection and embedding model, it automatically generates questions from your data chunks, attempts to retrieve the relevant chunks for each question, and then calculates and visualizes retrieval performance, showing how reliably your RAG system finds the right information.
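The core idea behind this kind of evaluation can be sketched in a few lines: for each auto-generated question, check whether the chunk it was generated from appears in the retriever's top-k results. The sketch below is a generic hit-rate@k computation, not the smallevals API; all names are illustrative.

```python
# Generic retrieval-evaluation sketch (hypothetical names, not the smallevals API).
# Each question is generated from a known "gold" chunk; a retrieval counts as a
# hit when that gold chunk appears among the top-k retrieved chunk IDs.

def hit_rate_at_k(retrieved: list[list[str]], gold: list[str], k: int) -> float:
    """Fraction of queries whose source chunk appears in the top-k results."""
    hits = sum(1 for ranked, g in zip(retrieved, gold) if g in ranked[:k])
    return hits / len(gold)

# Ranked chunk IDs returned by a retriever for three generated questions,
# and the chunk each question was generated from.
retrieved = [["c1", "c7", "c3"], ["c9", "c2", "c4"], ["c5", "c6", "c8"]]
gold_chunks = ["c1", "c4", "c2"]

print(f"hit rate @3: {hit_rate_at_k(retrieved, gold_chunks, k=3):.2f}")
print(f"hit rate @1: {hit_rate_at_k(retrieved, gold_chunks, k=1):.2f}")
```

Varying k shows the usual trade-off: a larger k raises the hit rate but passes more (possibly irrelevant) context to the generator.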

Available on PyPI.

Use this if you need a fast, local, and cost-effective way to measure and improve the quality of your RAG system's information retrieval.

Not ideal if you primarily need to evaluate the quality of the generated answers rather than the retrieval of relevant context.

Tags: AI Development, MLOps, Natural Language Processing, RAG System Evaluation, Vector Database Management
No License
Maintenance 6 / 25
Adoption 6 / 25
Maturity 14 / 25
Community 9 / 25


Stars: 18
Forks: 2
Language: Python
License: None
Last pushed: Dec 04, 2025
Commits (30d): 0
Dependencies: 28

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/mburaksayici/smallevals"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.