relari-ai/continuous-eval
Data-Driven Evaluation for LLM-Powered Applications
This tool helps AI engineers and MLOps professionals rigorously test and refine their Large Language Model (LLM) applications. It takes in datasets of questions, retrieved contexts, and generated answers, then outputs comprehensive performance metrics. You'd use this to understand how well your LLM application is performing across different stages, like retrieval or generation, and identify areas for improvement.
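To illustrate the data shape just described, here is a minimal sketch of scoring a single record with continuous-eval; the import path, metric class (PrecisionRecallF1), field names (retrieved_context, ground_truth_context, ground_truths), and calling convention are assumptions about the library's API, so verify them against the repo's README.

# One evaluation record: a question, the retrieved contexts, the generated answer, and references.
from continuous_eval.metrics.retrieval import PrecisionRecallF1  # assumed import path

datum = {
    "question": "What is the capital of France?",
    "retrieved_context": [
        "Paris is the capital of France.",
        "Lyon is a large city in France.",
    ],
    "ground_truth_context": ["Paris is the capital of France."],
    "answer": "Paris",
    "ground_truths": ["Paris is the capital of France."],
}

# Deterministic retrieval metric: compares retrieved contexts against ground-truth contexts.
metric = PrecisionRecallF1()
print(metric(**datum))  # assumed to return a dict of precision/recall/F1 scores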
516 stars. No commits in the last 6 months.
Use this if you are developing or managing LLM-powered applications and need a systematic, data-driven way to evaluate their performance.
Not ideal if you are looking for a simple, one-off evaluation for a single LLM prompt without an overarching application or data pipeline.
Stars: 516
Forks: 37
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/relari-ai/continuous-eval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
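The same endpoint can be called programmatically; this is a minimal sketch assuming the endpoint returns a JSON body, and the response schema is not documented here, so inspect the payload before depending on specific fields.

# Fetch the quality data for relari-ai/continuous-eval (no API key needed up to 100 requests/day).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/relari-ai/continuous-eval"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
data = resp.json()       # assumes a JSON response; field names are not documented here
print(data)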
Compare
Higher-rated alternatives
modelscope/evalscope
A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation...
izam-mohammed/ragrank
🎯 Your free LLM evaluation toolkit helps you assess the accuracy of facts, how well it...
Kareem-Rashed/rubric-eval
Independent framework to test, benchmark, and evaluate LLMs & AI agents locally.
justplus/llm-eval
A large language model evaluation platform supporting multiple evaluation benchmarks, custom datasets, and performance testing. Supports RAG evaluation on custom datasets.
cleanlab/tlm
Score the trustworthiness of outputs from any LLM in real-time