romilandc/llama-index-RAG
A RAG implementation on LlamaIndex using Qdrant vector stores as storage: ingest a set of PDFs, store their embeddings in the database, and query them with an LLM.
This helps researchers, analysts, or anyone working with large collections of PDF documents quickly find specific information and generate summaries or answers. You feed it a set of PDF files, then ask questions in plain language; it extracts relevant details or generates new content grounded in your documents. It's aimed at professionals who need to synthesize information from many PDFs without manually sifting through each one.
No commits in the last 6 months.
Use this if you need to quickly get answers or generate content from a personal library of PDF documents.
Not ideal if you need to process live web data, structured databases, or if your primary goal is visual analysis of documents rather than text extraction.
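The retrieve-then-answer mechanics this repo builds on can be sketched without any dependencies. Everything below is illustrative and not taken from the repo's code: the repo uses a real embedding model via LlamaIndex and a Qdrant collection as the store, where this sketch substitutes bag-of-words vectors and a plain list.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector.
    (The repo presumably uses a real embedding model through LlamaIndex;
    this stand-in only illustrates the mechanics.)"""
    return Counter(text.lower().replace("?", " ").replace(".", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector store": (chunk, vector) pairs -- the role Qdrant plays in the repo,
# with text chunks that would come from parsed PDFs.
chunks = [
    "Qdrant is a vector database for similarity search.",
    "LlamaIndex connects LLMs to external data sources.",
    "PDF parsing extracts plain text from documents.",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored chunks by similarity to the query and return the top k."""
    query_vec = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(query_vec, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

print(retrieve("which database does similarity search?")[0])
```

In the real pipeline, the retrieved chunks are then passed to an LLM as context so the answer is grounded in the stored documents rather than the model's training data.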
Stars
15
Forks
1
Language
Python
License
Apache-2.0
Category
vector-db
Last pushed
Apr 04, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/romilandc/llama-index-RAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
yichuan-w/LEANN
[MLsys2026]: RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast,...
byerlikaya/SmartRAG
Multi-Modal RAG for .NET — query databases, documents, images and audio in natural language....
aws-samples/layout-aware-document-processing-and-retrieval-augmented-generation
Advanced document extraction and chunking techniques for retrieval augmented generation that is...
sourangshupal/simple-rag-langchain
Exploring the Basics of Langchain
sion42x/llama-index-milvus-example
Open AI APIs with Llama Index and Milvus Vector DB for Retrieval Augmented Generation (RAG) testing