local-LLM-with-RAG and RAG-MultiFile-QA

These projects are ecosystem siblings: both implement RAG pipelines for document QA. local-LLM-with-RAG provides a framework for running local LLMs, while RAG-MultiFile-QA provides a multi-file document interface; both build on shared infrastructure such as LangChain and embedding models.

local-LLM-with-RAG: overall score 54 (Established)
  Maintenance 6/25, Adoption 10/25, Maturity 16/25, Community 22/25
  Stars: 271 | Forks: 52 | Downloads: | Commits (30d): 0
  Language: Python | License: MIT
  No package published; no dependents

RAG-MultiFile-QA: overall score 45 (Emerging)
  Maintenance 10/25, Adoption 4/25, Maturity 16/25, Community 15/25
  Stars: 5 | Forks: 4 | Downloads: | Commits (30d): 0
  Language: Python | License: MIT
  No package published; no dependents

About local-LLM-with-RAG

amscotti/local-LLM-with-RAG

Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)

This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.

personal-knowledge-base document-qa private-data-analysis local-ai-assistant information-retrieval
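The workflow described above (retrieve relevant passages from your own documents, then have a local model answer from them) can be sketched as a minimal RAG loop. This is an illustrative sketch, not the repository's actual implementation: it uses a toy bag-of-words similarity in place of real model embeddings, and stops at building the prompt that would be sent to a local LLM (for example via Ollama).

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline uses model embeddings."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]


def build_prompt(chunks: list[str], question: str) -> str:
    """Assemble the grounded prompt a local LLM would answer from."""
    context = "\n\n".join(retrieve(chunks, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


docs = [
    "Invoices are stored in the finance folder and archived yearly.",
    "The deployment guide covers Kubernetes and Docker Compose.",
    "Travel expenses must be filed within 30 days of the trip.",
]
prompt = build_prompt(docs, "Where are invoices stored?")
```

Because nothing leaves the machine, the privacy property the description emphasizes comes for free: retrieval and generation both run locally.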

About RAG-MultiFile-QA

Uni-Creator/RAG-MultiFile-QA

A RAG (Retrieval-Augmented Generation) AI chatbot that allows users to upload multiple document types (PDF, DOCX, TXT, CSV) and ask questions about the content. Built using LangChain, Hugging Face embeddings, and Streamlit, it enables efficient document search and question answering using vector-based retrieval. 🚀
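The multi-format ingestion step can be sketched with a stdlib-only loader that dispatches on file extension. This is a simplified assumption-based sketch, not the project's code: the real project uses LangChain document loaders, and PDF/DOCX support would require third-party parsers, so only TXT and CSV are handled here.

```python
import csv
from pathlib import Path


def load_text(path: Path) -> str:
    """Read a plain-text file as one document string."""
    return path.read_text(encoding="utf-8")


def load_csv(path: Path) -> str:
    """Flatten CSV rows into lines so they can be chunked like prose."""
    with path.open(newline="", encoding="utf-8") as f:
        return "\n".join(", ".join(row) for row in csv.reader(f))


# Extension-to-loader dispatch table; .pdf / .docx would need
# third-party parsers (e.g. pypdf, python-docx) and are omitted.
LOADERS = {".txt": load_text, ".csv": load_csv}


def load_documents(paths: list[Path]) -> dict[str, str]:
    """Load each file with the loader matching its extension."""
    docs = {}
    for p in paths:
        loader = LOADERS.get(p.suffix.lower())
        if loader is None:
            raise ValueError(f"Unsupported file type: {p.suffix}")
        docs[p.name] = loader(p)
    return docs
```

Once every format is normalized to plain text, the downstream steps (chunking, embedding with a Hugging Face model, and vector-based retrieval) can treat all uploads uniformly.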

Scores updated daily from GitHub, PyPI, and npm data.