local-LLM-with-RAG and Local-RAG-Cookbot

                  local-LLM-with-RAG           Local-RAG-Cookbot
Overall score     54 (Established)             34 (Emerging)
Maintenance       6/25                         2/25
Adoption          10/25                        3/25
Maturity          16/25                        16/25
Community         22/25                        13/25
Stars             271                          4
Forks             52                           2
Downloads         n/a                          n/a
Commits (30d)     0                            0
Language          Python                       Python
License           MIT                          GPL-3.0
Flags             No package, no dependents    Stale 6m; no package, no dependents

About local-LLM-with-RAG

amscotti/local-LLM-with-RAG

Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)

This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.
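The retrieval step such a tool depends on can be sketched in a few lines: split documents into chunks, embed each chunk, and rank chunks by cosine similarity to the question. This is a minimal sketch, not the project's actual code; the `embed` function below is a hypothetical stand-in (a hashed bag-of-words vector) for the local embedding model the project would really use.

```python
import math
from collections import Counter

def embed(text, dim=64):
    # Hypothetical stand-in for a real local embedding model:
    # a fixed-size hashed bag-of-words vector.
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all-zero.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=2):
    # Rank document chunks by similarity to the question, keep the top k.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The quarterly report shows revenue grew 12 percent.",
    "Employee onboarding requires a signed NDA.",
    "Revenue growth was driven by the new product line.",
]
top = retrieve("What drove revenue growth?", chunks, k=2)
```

In the real project the embeddings would come from a locally served model rather than this toy hash, but the ranking logic is the same shape.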

personal-knowledge-base document-qa private-data-analysis local-ai-assistant information-retrieval

About Local-RAG-Cookbot

Violet-sword/Local-RAG-Cookbot

A Python project that deploys a local RAG chatbot using the Ollama API. It refines answers with an internal RAG knowledge base, using both an embedding model and an LLM.
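The described flow (retrieve from a local knowledge base, then refine the answer with a local LLM via Ollama) can be sketched against the `ollama` Python client. This is an illustrative assumption about the shape of the code, not the project's implementation; the `build_prompt` helper and the model name are made up for the example.

```python
def build_prompt(context_chunks, question):
    # Fold retrieved knowledge-base chunks into a single grounded prompt.
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

def answer(question, context_chunks, model="llama3"):
    # Requires the `ollama` package and a running local Ollama server;
    # the model name is an assumption, not the project's choice.
    import ollama
    prompt = build_prompt(context_chunks, question)
    reply = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]

prompt = build_prompt(["Simmer the sauce for 20 minutes."], "How long should I simmer?")
```

The embedding model would be used upstream to select `context_chunks`; the LLM then answers only from that retrieved context, which is what keeps the chatbot grounded in the local knowledge base.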

Scores updated daily from GitHub, PyPI, and npm data.