0xshre/rag-evaluation

A question-answering RAG system that uses a custom ChromaDB vector store to retrieve relevant passages and then an LLM to generate the answer.

Score: 29 / 100 (Experimental)

This project helps evaluate and improve question-answering systems built using Retrieval-Augmented Generation (RAG). You feed in documents and questions, and it generates answers while also providing a detailed report on how accurate and relevant the answers are. It's for data scientists and AI engineers who are developing or fine-tuning RAG-based chatbots or knowledge retrieval tools.
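The core loop this repository evaluates looks roughly like the sketch below: index passages in ChromaDB, retrieve the top matches for a question, generate an answer with an LLM, and score it against a reference. Everything here (the sample data, generate_answer, the token-overlap check) is illustrative, not the notebook's actual pipeline or metrics.

import chromadb

client = chromadb.Client()  # in-memory ChromaDB instance
collection = client.create_collection(name="docs")

# Index a few passages; ChromaDB embeds them with its default model.
collection.add(
    documents=[
        "RAG combines retrieval with generation.",
        "ChromaDB is an open-source embedding database.",
    ],
    ids=["p1", "p2"],
)

def generate_answer(question, passages):
    # Placeholder for an LLM call; swap in your provider of choice.
    context = "\n".join(passages)
    return f"Context: {context} | Question: {question}"

question = "What does RAG combine?"
hits = collection.query(query_texts=[question], n_results=2)
passages = hits["documents"][0]  # retrieved passages for the first query
answer = generate_answer(question, passages)

# Naive token-overlap score standing in for the repo's accuracy report.
reference = "retrieval and generation"
overlap = set(answer.lower().split()) & set(reference.lower().split())
print(f"token overlap with reference: {len(overlap)} words")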

No commits in the last 6 months.

Use this if you are building or evaluating a RAG-based question-answering system and need to understand its performance in terms of answer quality and context utilization.

Not ideal if you are looking for a ready-to-use, off-the-shelf chatbot without needing to delve into RAG system performance metrics.

AI-development Natural-Language-Processing Knowledge-retrieval ML-evaluation Question-Answering-systems
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 15 / 25


Stars: 17
Forks: 4
Language: Jupyter Notebook
License: None
Last pushed: Feb 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/0xshre/rag-evaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
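If you'd rather fetch this from Python than curl, the minimal sketch below uses requests; the response schema is not documented here, so treat the payload's field names as unknowns to inspect.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/0xshre/rag-evaluation"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raises on 4xx/5xx, e.g. once the daily quota is hit
print(resp.json())       # inspect the payload; fields are not documented here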