SciPhi-AI/RAG-Performance
Measuring the ingestion throughput and latency of RAG solutions
This tool helps RAG (Retrieval-Augmented Generation) solution developers compare the performance of different RAG frameworks when ingesting data. It takes common RAG frameworks and benchmark datasets (like Wikipedia articles or various text/PDF files) as input. It then measures and outputs key performance metrics such as data ingestion time, tokens processed per second, and megabytes processed per second, helping developers choose the most efficient framework for their specific application.
No commits in the last 6 months.
Use this if you are a developer building RAG solutions and need to compare how different RAG frameworks handle data ingestion and throughput.
Not ideal if you are an end-user simply looking to apply an existing RAG solution, rather than evaluating the underlying frameworks.
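The metrics described above reduce to simple ratios over a timed ingestion run. As a minimal sketch (the function name and the sample numbers are illustrative, not taken from the tool itself):

```python
# Hypothetical illustration of the throughput metrics this tool reports.
# The tool measures these during a real ingestion run; the inputs here are made up.
def ingestion_metrics(ingest_seconds: float, tokens: int, size_bytes: int) -> dict:
    """Compute tokens/sec and MB/sec from a timed ingestion run."""
    return {
        "ingestion_time_s": ingest_seconds,
        "tokens_per_second": tokens / ingest_seconds,
        "mb_per_second": size_bytes / (1024 * 1024) / ingest_seconds,
    }

# Example: 12.5 s to ingest 250,000 tokens from a 5 MiB corpus
m = ingestion_metrics(12.5, 250_000, 5 * 1024 * 1024)
print(m["tokens_per_second"])  # 20000.0
print(m["mb_per_second"])      # 0.4
```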
Stars: 19
Forks: 6
Language: Python
License: MIT
Category:
Last pushed: Jul 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/SciPhi-AI/RAG-Performance"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
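The same endpoint can be queried from Python with only the standard library. This is a sketch built from the curl example above; the JSON response schema is an assumption, so the code prints the raw payload rather than relying on specific fields:

```python
import json
import urllib.request

# Base URL taken from the curl example; per-repo data lives under /{owner}/{repo}.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_api_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality data for one repository.

    The response is assumed to be JSON; its fields are not documented here,
    so callers should inspect the payload before depending on any key.
    """
    with urllib.request.urlopen(quality_api_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality("SciPhi-AI", "RAG-Performance"), indent=2))
```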
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems