relari-ai/continuous-eval

Data-Driven Evaluation for LLM-Powered Applications

Score: 41 / 100 (Emerging)

This tool helps AI engineers and MLOps professionals rigorously test and refine Large Language Model (LLM) applications. It takes datasets of questions, retrieved contexts, and generated answers, and outputs performance metrics for each stage of the pipeline. Use it to understand how well your LLM application performs at stages such as retrieval and generation, and to identify where to improve.
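To illustrate the data shape it works with, here is a minimal sketch based on the pattern in the project's README: a single datum is a plain dictionary, and a metric object scores it directly. The import path, class name, and field names (PrecisionRecallF1, retrieved_context, ground_truth_context) are assumptions that may differ across versions, so check the repository docs before relying on them.

from continuous_eval.metrics.retrieval import PrecisionRecallF1

# One evaluation datum: the contexts the retriever returned and the
# contexts marked as relevant (ground truth). Field names assumed from the README.
datum = {
    "retrieved_context": [
        "Paris is the capital of France.",
        "Lyon is a large city in France.",
    ],
    "ground_truth_context": ["Paris is the capital of France."],
}

# Score the retrieval stage for this single datum.
metric = PrecisionRecallF1()
print(metric(**datum))  # precision / recall / F1 over the retrieved contexts

In practice you would run metrics like this over a whole dataset of question, context, and answer records rather than a single dictionary.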

516 stars. No commits in the last 6 months.

Use this if you are developing or managing LLM-powered applications and need a systematic, data-driven way to evaluate their performance.

Not ideal if you are looking for a simple, one-off evaluation for a single LLM prompt without an overarching application or data pipeline.

LLM-development AI-evaluation MLOps RAG-systems AI-testing
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25

Stars: 516
Forks: 37
Language: Python
License: Apache-2.0
Last pushed: Jan 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/relari-ai/continuous-eval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
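For programmatic access, a minimal Python sketch of the same call follows. The requests dependency is an assumption, and the shape of the returned JSON is not documented here, so inspect the raw payload before parsing specific fields.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/relari-ai/continuous-eval"

# Fetch the quality-score record for this repository
# (no API key needed; subject to the 100 requests/day limit).
response = requests.get(URL, timeout=10)
response.raise_for_status()

# Print the raw JSON payload; field names are undocumented here,
# so inspect the output before building anything on top of it.
print(response.json())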