nuclia/nuclia-eval

Library for evaluating RAG using Nuclia's models

Quality score: 43/100 (Emerging)

This tool evaluates the performance of RAG (Retrieval-Augmented Generation) applications. You provide a question, the answer your RAG system generated, and the source documents (context) it used; the tool then scores how relevant the answer is to the question, how relevant each source document is to the question, and whether the answer is genuinely supported by those documents. It is aimed at developers and AI engineers building and refining RAG systems.
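A minimal sketch of what an evaluation call could look like in Python, following the workflow described above. The REMi class and evaluate_rag method names reflect the project's README as best understood and should be treated as assumptions; check the nuclia-eval documentation on PyPI for the exact interface.

# Sketch of a RAG evaluation call; REMi and evaluate_rag are assumed
# names, so verify them against the nuclia-eval README before relying on them.
from nuclia_eval import REMi

evaluator = REMi()  # loads Nuclia's evaluation model

query = "What is the capital of France?"
answer = "The capital of France is Paris."
contexts = [
    "Paris is the capital and most populous city of France.",
    "France is a country in Western Europe.",
]

# Expected outputs: relevance of the answer to the query, relevance of each
# context to the query, and whether the answer is grounded in the contexts.
answer_relevance, context_relevances, groundedness = evaluator.evaluate_rag(
    query=query, answer=answer, contexts=contexts
)
print(answer_relevance, context_relevances, groundedness)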

No commits in the last 6 months. Available on PyPI.

Use this if you are developing a RAG application and need to objectively measure the quality of its generated answers and the retrieved context.

Not ideal if you are a business user looking for a simple pass/fail judgment on a RAG system without getting into the technical evaluation metrics.

Tags: RAG evaluation, LLM development, AI quality assurance, natural language processing, information retrieval
Status: Stale (no commits in 6 months)
Maintenance: 0/25
Adoption: 6/25
Maturity: 25/25
Community: 12/25


Stars: 18
Forks: 3
Language: Python
License: MIT
Last pushed: Jul 31, 2024
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/nuclia/nuclia-eval"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
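The same endpoint can be called from Python; below is a minimal sketch using only the standard library. The response is assumed to be JSON, and its field names are not documented in this card, so the example simply pretty-prints the parsed payload.

# Fetch the quality data for nuclia/nuclia-eval via the public API.
# The JSON schema is an assumption; inspect the printed payload to see
# which fields are actually returned.
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/nuclia/nuclia-eval"

with urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))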