Eustema-S-p-A/SCARF
SCARF (System for Comprehensive Assessment of RAG Frameworks) is a modular evaluation framework for benchmarking deployed Retrieval Augmented Generation (RAG) applications. It offers end-to-end, black-box assessment across multiple configurations and supports automated testing with several vector databases and LLMs.
This tool helps AI engineers and machine learning practitioners systematically evaluate the performance of their RAG applications. It takes your deployed RAG systems, along with configurations such as different vector databases and large language models, and outputs detailed reports on factual accuracy, contextual relevance, and response coherence. You'd use it to compare and fine-tune different versions of your RAG applications.
No commits in the last 6 months.
Use this if you need to objectively compare multiple versions of your RAG application or understand how changes to its components impact its real-world performance.
Not ideal if you are looking for a tool to build or deploy RAG applications, as this is solely for evaluation.
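As a rough illustration of the kind of black-box comparison SCARF automates, consider the minimal Python sketch below. Every name in it (query_rag, overlap_score, the endpoints, and the token-overlap metric) is a hypothetical stand-in for illustration only, not SCARF's actual API:

```python
# Hypothetical sketch of black-box RAG evaluation across configurations.
# None of these names come from SCARF itself; they only illustrate the
# pattern: send the same test questions to each deployed configuration
# and score the answers with a shared metric.

import statistics

def query_rag(endpoint: str, question: str) -> str:
    # Stand-in for an HTTP call to a deployed RAG application; a real run
    # would POST the question to `endpoint` and return the generated answer.
    return "SCARF is an evaluation framework for RAG applications."

def overlap_score(answer: str, reference: str) -> float:
    # Crude token-overlap proxy for factual accuracy (illustrative only).
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / len(r) if r else 0.0

test_set = [
    {"question": "What is SCARF?",
     "reference": "An evaluation framework for RAG applications."},
]

# Each configuration pairs a vector database and an LLM behind one endpoint
# (hypothetical URLs).
configurations = {
    "faiss+gpt-4": "http://localhost:8001/query",
    "qdrant+llama3": "http://localhost:8002/query",
}

for name, endpoint in configurations.items():
    scores = [overlap_score(query_rag(endpoint, case["question"]),
                            case["reference"])
              for case in test_set]
    print(f"{name}: mean score = {statistics.mean(scores):.2f}")
```

In practice the metric would be one of the report dimensions above (factual accuracy, contextual relevance, response coherence) rather than token overlap.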
Stars: 7
Forks: —
Language: Python
License: AGPL-3.0
Category: —
Last pushed: Apr 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Eustema-S-p-A/SCARF"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
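For programmatic access, here is a minimal Python sketch of the same request using only the standard library. The URL is the one shown above; the shape of the JSON response is not assumed:

```python
# Fetch the SCARF quality data from the API shown above and pretty-print it.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/Eustema-S-p-A/SCARF"

with urllib.request.urlopen(URL) as response:
    data = json.load(response)  # parse the JSON body

print(json.dumps(data, indent=2))
```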
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems