oztrkoguz/RAG-Framework-Evaluation

This project aims to compare different Retrieval-Augmented Generation (RAG) frameworks in terms of speed and performance.

Overall score: 27 / 100 (Experimental)

This project helps developers compare the effectiveness and speed of various Retrieval-Augmented Generation (RAG) frameworks. Given a document and a large language model as input, it produces benchmark results for frameworks such as LlamaIndex, Autogen, and LangChain. A developer building RAG applications would use this to choose the most suitable framework.
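As a rough sketch of what such a benchmark involves (the repository's actual interface is not documented here, so every name below is an illustrative stub), the comparison reduces to timing each framework's document-ingest and query steps:

    # Hypothetical sketch, not the repo's actual API: time the ingest and
    # query steps of one framework. Real adapters for LlamaIndex, LangChain,
    # or Autogen would replace the stubs below.
    import time

    def timed(label, fn, *args):
        """Run fn(*args), print how long it took, and return its result."""
        t0 = time.perf_counter()
        result = fn(*args)
        print(f"{label}: {time.perf_counter() - t0:.3f}s")
        return result

    # Stub adapters standing in for framework-specific code.
    def build_index(path):
        return {"doc": path}        # real code would chunk and embed the file

    def query(index, question):
        return "stub answer"        # real code would retrieve and call the LLM

    index = timed("build", build_index, "paper.pdf")
    answer = timed("query", query, index, "What does the paper conclude?")

Repeating the same two measurements with each framework's adapter yields the side-by-side speed comparison the project reports.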

No commits in the last 6 months.

Use this if you are a developer needing to decide which RAG framework will offer the best speed and performance for your specific application.

Not ideal if you are an end-user looking for a ready-to-use RAG application rather than a tool for framework evaluation.

Tags: RAG development, LLM application building, framework comparison, AI engineering, natural language processing
Status: Stale (6 months), no package published, no dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 6 / 25
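The four components sum to the overall score: 0 + 5 + 16 + 6 = 27 out of 100.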


Stars: 14
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Jul 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/oztrkoguz/RAG-Framework-Evaluation"

The API is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
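The same endpoint is easy to call from Python using only the standard library; since the response schema is not documented here, this sketch simply prints the full JSON payload:

    # Fetch the quality data for this repo and pretty-print the response.
    import json
    import urllib.request

    url = ("https://pt-edge.onrender.com/api/v1/quality/rag/"
           "oztrkoguz/RAG-Framework-Evaluation")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # Field names are not documented here, so dump everything rather than
    # guess at specific keys.
    print(json.dumps(data, indent=2))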