local-LLM-with-RAG and rag-architecture
These are ecosystem siblings: one provides a practical implementation pattern for running local LLMs with RAG, while the other offers an architectural framework that could underpin or inform such implementations.
About local-LLM-with-RAG
amscotti/local-LLM-with-RAG
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.
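The core of this retrieve-then-ask pattern is finding the document chunks most similar to the question and feeding them to the model as context. The sketch below is a toy illustration of that retrieval step, not the repository's actual code: it uses bag-of-words cosine similarity in place of real learned embeddings, and builds the prompt string instead of calling a local model. All names (`embed`, `retrieve`, the sample chunks) are illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline uses an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the question, keep the top k.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The warranty covers parts and labor for two years.",
    "Refunds are processed within five business days.",
    "The device supports Bluetooth 5.0 and Wi-Fi 6.",
]
question = "How long does the warranty last?"
context = retrieve(question, chunks)

# The retrieved chunks become grounding context for the local LLM.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: " + question
```

In a real deployment, the embedding and generation steps are handled by a locally hosted model, so the documents never leave your machine.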
About rag-architecture
jolual2747/rag-architecture
RAG Architecture for Modern Chatbots
This project helps you build intelligent chatbots that can answer questions accurately by looking up information from your own documents. You provide documents (like manuals, reports, or articles) as input, and the chatbot provides specific, contextually relevant answers as output. This is ideal for anyone needing to deploy a smart question-answering system or virtual assistant for customer support, internal knowledge bases, or information retrieval.
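Architecturally, a RAG chatbot separates two swappable components: a retriever that fetches relevant passages and a generator that produces the answer from them. The sketch below illustrates that separation with stand-in components; it is not the project's code, and the class and function names (`RAGChatbot`, `keyword_retriever`, `echo_generator`) are hypothetical.

```python
from typing import Callable

class RAGChatbot:
    """Wires a retriever and a generator behind a single ask() entry point."""

    def __init__(self,
                 retriever: Callable[[str], list[str]],
                 generator: Callable[[str], str]):
        self.retriever = retriever
        self.generator = generator

    def ask(self, question: str) -> str:
        # Retrieve supporting passages, then hand a grounded prompt to the generator.
        context = "\n".join(self.retriever(question))
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return self.generator(prompt)

# Stub components; a real deployment swaps in a vector store and an LLM.
docs = {
    "returns": "Items can be returned within 30 days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def keyword_retriever(question: str) -> list[str]:
    # Naive keyword match standing in for vector search.
    return [text for key, text in docs.items() if key in question.lower()]

def echo_generator(prompt: str) -> str:
    # Stand-in "model" that just echoes the first context line.
    lines = prompt.splitlines()
    return lines[1] if len(lines) > 1 else ""

bot = RAGChatbot(keyword_retriever, echo_generator)
answer = bot.ask("What is your returns policy?")
```

Because the retriever and generator are independent, either can be replaced (a different vector database, a different model) without touching the rest of the pipeline, which is the main benefit of treating RAG as an architecture rather than a single script.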