mickymultani/RAG-with-Cross-Encoder-Reranker

Testing the speed and accuracy of RAG with and without a Cross Encoder Reranker.

Score: 25 / 100 (Experimental)

This project helps AI developers understand the trade-offs when building Retrieval-Augmented Generation (RAG) systems. It compares the accuracy and speed of RAG models when retrieving information from lengthy documents, with and without a 'Cross Encoder Reranker.' The input is a long document and questions; the output is an answer along with performance metrics. This is for developers building AI chatbots or knowledge retrieval systems that need to answer questions from specific documents.

No commits in the last 6 months.

Use this if you are developing a RAG-based AI application and need to decide whether to prioritize response speed or the accuracy and contextual understanding of answers drawn from extensive documents.

Not ideal if you are looking for a ready-to-use RAG solution, as this project focuses on evaluating underlying architectural choices rather than providing an end-user application.
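The trade-off this project measures can be sketched as a two-stage retrieval pipeline. The sketch below is a minimal illustration, not the notebook's actual code: the bi-encoder and cross-encoder are stood in by toy word-overlap scorers, where a real pipeline would use embedding similarity for stage one and a trained cross-encoder model (e.g. from sentence-transformers) for stage two.

```python
# Sketch of RAG retrieval with and without a cross-encoder reranking stage.
# Both scoring functions are toy stand-ins for illustration only.

def bi_encoder_score(query: str, chunk: str) -> float:
    """Cheap first-stage score: fraction of query words present in the chunk."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def cross_encoder_score(query: str, chunk: str) -> float:
    """Stand-in for a cross-encoder, which scores the (query, chunk) pair
    jointly and more accurately, at higher latency per pair."""
    q = query.lower().split()
    c = chunk.lower()
    return float(sum(len(w) for w in q if w in c))

def retrieve(query, chunks, top_k=3, rerank=False):
    # Stage 1: fast candidate retrieval over all chunks.
    candidates = sorted(
        chunks, key=lambda ch: bi_encoder_score(query, ch), reverse=True
    )[: top_k * 2]
    if not rerank:
        return candidates[:top_k]
    # Stage 2 (optional): slower, sharper reranking of the candidate pool.
    return sorted(
        candidates, key=lambda ch: cross_encoder_score(query, ch), reverse=True
    )[:top_k]

chunks = [
    "The cross encoder reranker improves answer accuracy.",
    "Bananas are yellow.",
    "RAG retrieves document chunks before generation.",
    "Reranking adds latency but sharpens relevance.",
]
print(retrieve("how does the reranker affect accuracy", chunks, top_k=2, rerank=True))
```

The speed/accuracy question the project benchmarks comes from stage two: reranking scores every (query, candidate) pair with a heavier model, so answers tend to improve while end-to-end latency grows with the candidate count.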

Tags: AI-development, natural-language-processing, information-retrieval, question-answering-systems, AI-model-evaluation
Badges: No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 9 / 25


Stars: 49
Forks: 4
Language: Jupyter Notebook
License: none
Last pushed: Jan 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/mickymultani/RAG-with-Cross-Encoder-Reranker"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.