avnlp/rag-pipelines
Advanced RAG Pipelines and Evaluation
This project helps domain experts, such as medical researchers or financial analysts, build advanced question-answering systems over their specialized documents. It ingests large volumes of domain-specific text, such as medical research papers or financial filings, processes them, and returns accurate, structured answers to user queries, complete with explanations. The target users are professionals who need to extract precise information from large, specialized document collections.
Use this if you need to build a robust question-answering system that can accurately retrieve and synthesize information from a large corpus of specialized documents in fields like medicine or finance.
Not ideal if you're looking for a simple, general-purpose chatbot or if your primary need is for conversational AI rather than precise information retrieval and synthesis from specific documents.
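The ingest-retrieve-synthesize flow described above can be sketched in a few lines. This is a hypothetical toy, not the repository's actual API: a keyword-overlap retriever stands in for a real embedding-based one, and the corpus, function names, and answer fields are all illustrative assumptions.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank document ids by word overlap with the query (toy stand-in
    for an embedding-based retriever)."""
    q = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc_id: len(q & set(corpus[doc_id].lower().split())),
        reverse=True,
    )
    return ranked[:k]


def answer(query: str, corpus: dict[str, str]) -> dict:
    """Return a structured answer that cites the retrieved source,
    mirroring the 'structured answers with explanations' idea."""
    top = retrieve(query, corpus)[0]
    return {"query": query, "source": top, "context": corpus[top]}


# Hypothetical two-document corpus for illustration.
corpus = {
    "filing-2023": "Acme reported revenue of 10M in fiscal 2023.",
    "trial-note": "The phase 2 trial met its primary endpoint.",
}
result = answer("What revenue did Acme report?", corpus)
```

A real pipeline would replace the overlap scorer with dense retrieval and pass the retrieved context to a generator model, but the retrieve-then-answer shape is the same.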
Stars: 10
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/avnlp/rag-pipelines"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
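The same request can be made from Python with the standard library. The endpoint is taken from the curl command above; the response schema is an assumption (the API's fields are not documented here), so the sketch only builds the URL and parses whatever JSON comes back.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def build_url(owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response (schema not documented here)."""
    with urllib.request.urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


# Usage (makes a network call):
# data = fetch_quality("avnlp", "rag-pipelines")
# print(json.dumps(data, indent=2))
```

Anonymous access is rate-limited to 100 requests/day, so cache responses or use a free key for higher limits.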
Higher-rated alternatives
vectara/open-rag-eval
RAG evaluation without the need for "golden answers"
DocAILab/XRAG
XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced...
HZYAI/RagScore
⚡️ The "1-Minute RAG Audit" — Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or...
AIAnytime/rag-evaluator
A library for evaluating Retrieval-Augmented Generation (RAG) systems (The traditional ways).
microsoft/benchmark-qed
Automated benchmarking of Retrieval-Augmented Generation (RAG) systems