arjbingly/grag

GRAG is a simple Python package that provides an easy end-to-end solution for implementing Retrieval-Augmented Generation (RAG). The package offers an easy way to run various LLMs locally, thanks to LlamaCpp, and also supports vector stores such as Chroma and DeepLake.

Quality score: 36 / 100 (Emerging)

This is a tool for developers looking to quickly build and deploy a Retrieval Augmented Generation (RAG) system. It takes your documents and a large language model (LLM) as input, then allows the LLM to provide more informed and accurate answers by retrieving relevant information from your documents. Developers building AI applications, chatbots, or Q&A systems would find this useful.

No commits in the last 6 months. Available on PyPI.

Use this if you are a developer and want an easy, end-to-end solution to integrate RAG into your AI applications, especially if you plan to run LLMs locally.

Not ideal if you are an end-user without programming experience, as this is a developer tool requiring Python knowledge.

Topics: AI application development, natural language processing, information retrieval, LLM integration, document-based Q&A
Stale: 6 months
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 5 / 25


Stars: 15
Forks: 1
Language: Python
License: AGPL-3.0
Last pushed: May 11, 2024
Commits (30d): 0
Dependencies: 19

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/arjbingly/grag"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
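The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the endpoint returns JSON as the curl example suggests; the exact response fields are not documented here, so inspect the decoded payload before relying on any specific key:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed for up to 100 requests/day; a free key
    raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # The grag repository shown on this page:
    print(quality_url("rag", "arjbingly", "grag"))
```

The helper names (`quality_url`, `fetch_quality`) are illustrative, not part of any published client; only the URL itself comes from the curl example above.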