local-LLM-with-RAG and Local-RAG-Cookbot
About local-LLM-with-RAG
amscotti/local-LLM-with-RAG
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.
About Local-RAG-Cookbot
Violet-sword/Local-RAG-Cookbot
A Python project that deploys a local RAG chatbot using the Ollama API. It refines answers with an internal RAG knowledge base and uses both embedding and LLM models.
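Both projects follow the same core RAG flow: embed the document chunks, retrieve the chunks most similar to the query, and pass them to a local LLM as context. A minimal sketch of the retrieval step, using hypothetical toy vectors in place of real Ollama embeddings (the document texts, vectors, and function names below are illustrative, not taken from either repository):

```python
# Sketch of the retrieval step in a local RAG pipeline.
# In the actual projects, embeddings come from a local Ollama embedding
# model; here we use hypothetical 2-D toy vectors to show the mechanics.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    scored = sorted(
        zip(docs, doc_vecs),
        key=lambda pair: cosine_similarity(query_vec, pair[1]),
        reverse=True,
    )
    return [doc for doc, _ in scored[:k]]

# Toy corpus: each chunk paired with a (hypothetical) embedding.
docs = ["chunk about billing", "chunk about recipes", "chunk about login"]
vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.3]]
query = [1.0, 0.1]  # pretend this is the embedding of a billing question

top = retrieve(query, vecs, docs, k=2)
print(top)  # ['chunk about billing', 'chunk about login']
```

The retrieved chunks would then be prepended to the user's question in the prompt sent to the local LLM, which is what keeps the whole pipeline private and offline.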