amscotti/local-LLM-with-RAG
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.
Use this if you need to reliably query your private document collection with an AI that can intelligently decide when and how to search for information, all running on your own computer.
Not ideal if you need to use very small AI models (under 8 billion parameters) or if your AI model doesn't support 'tool calling' capabilities for searching documents.
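To make the 'tool calling' requirement concrete: a tool-calling model accepts a JSON description of the functions it may invoke, such as a document-search tool. The sketch below shows the widely used OpenAI-style tool schema; the function name and parameters are hypothetical illustrations, not this repository's actual tool definition.

```python
# Hypothetical tool definition in the OpenAI-style schema that tool-calling
# models consume. A model without tool-calling support cannot act on this,
# which is why such models are a poor fit for this project.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_documents",  # assumed name, for illustration only
        "description": "Search the indexed document collection for passages "
                       "relevant to a query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Natural-language search query.",
                },
            },
            "required": ["query"],
        },
    },
}
```

The model decides when to emit a call to this tool (with a filled-in `query`), which is what lets it "intelligently decide when and how to search" rather than retrieving on every turn.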
Stars: 271
Forks: 52
Language: Python
License: MIT
Category:
Last pushed: Jan 02, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/amscotti/local-LLM-with-RAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
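The same endpoint shown in the curl command can be queried from Python. This is a minimal sketch using only the standard library; the shape of the returned JSON is an assumption, so inspect the actual response before relying on specific keys.

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/rag/amscotti/local-LLM-with-RAG"

def fetch_repo_quality(url: str = API_URL) -> dict:
    """Fetch the quality record for this repository from the public API.

    Note: the keyless tier is rate-limited to 100 requests/day, so cache
    responses rather than polling. The parsed JSON structure is not
    documented here and may change.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_repo_quality()
    print(json.dumps(data, indent=2))
```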
Related tools
watat83/document-chat-system
Open-source document chat platform with semantic search, RAG (Retrieval Augmented Generation),...
ranfysvalle02/Interactive-RAG
An interactive RAG agent built with LangChain and MongoDB Atlas. Manage your knowledge base,...
ChatFAQ/ChatFAQ
Open-source ecosystem for building AI-powered conversational solutions using RAG, agents, FSMs, and LLMs.
MFYDev/odoo-expert
RAG-powered documentation assistant that converts, processes, and provides semantic search...
zilliztech/akcio
Akcio is a demonstration project for Retrieval Augmented Generation (RAG). It leverages the...