ALucek/custom-rag-evals
Applying domain specific evaluations to RAG chunking and embedding functions
This project helps you optimize how documents are prepared for a Retrieval-Augmented Generation (RAG) system: it runs your own documents through different text-splitting and embedding methods, then reports which combination retrieves information most accurately. It is aimed at AI developers and data scientists building custom RAG applications who need to ensure high-quality retrieval.
No commits in the last 6 months.
Use this if you are building a RAG system and need to determine the most effective way to break down your unique documents and embed them for optimal information retrieval.
Not ideal if you are looking for a pre-built RAG application or a simple plug-and-play solution without needing to evaluate underlying strategies.
Stars: 18
Forks: 3
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Dec 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/ALucek/custom-rag-evals"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems