romilandc/llama-index-RAG

A RAG implementation built on LlamaIndex, using a Qdrant vector store for storage. Take some PDFs, store them in the database, and use an LLM for inference.

Score: 27 / 100 (Experimental)

This helps researchers, analysts, or anyone working with large collections of PDF documents quickly find specific information and generate summaries or answers. You feed it a set of PDF files and then ask questions in plain language; it extracts relevant details or creates new content based on your documents. It's for professionals who need to synthesize information from many PDFs efficiently without manually sifting through each one.
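As a rough illustration, a pipeline like the one described above can be assembled in a few lines of LlamaIndex. This is a minimal sketch, not the repo's actual code: it assumes a recent llama-index release with the Qdrant integration (llama-index-vector-stores-qdrant) installed, an OpenAI API key in the environment for the default LLM and embeddings, and placeholder names for the PDF folder and Qdrant collection.

import qdrant_client
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Load PDFs from a local folder ("./pdfs" is a placeholder path).
documents = SimpleDirectoryReader("./pdfs").load_data()

# Persist embeddings in a local Qdrant collection ("docs" is a placeholder name).
client = qdrant_client.QdrantClient(path="./qdrant_data")
vector_store = QdrantVectorStore(client=client, collection_name="docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Index the documents, then ask questions in plain language via an LLM.
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the key findings across these documents."))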

No commits in the last 6 months.

Use this if you need to quickly get answers or generate content from a personal library of PDF documents.

Not ideal if you need to process live web data, structured databases, or if your primary goal is visual analysis of documents rather than text extraction.

Tags: document-analysis, information-retrieval, research-automation, knowledge-management, content-synthesis
Flags: Stale (6 months), No package, No dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 5 / 25

Stars: 15
Forks: 1
Language: Python
License: Apache-2.0
Category: local-rag-stacks
Last pushed: Apr 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/romilandc/llama-index-RAG"

The API is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
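For programmatic use, the same endpoint can be fetched from Python. This is a minimal sketch using the requests library; the response's JSON field names are not documented here, so the example just prints the payload.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/vector-db/romilandc/llama-index-RAG"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())       # print the full payload, since field names are undocumented here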