amscotti/local-LLM-with-RAG

Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)

Score: 54 / 100 (Established)

This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.


Use this if you need to reliably query your private document collection with an AI that can intelligently decide when and how to search for information, all running on your own computer.

Not ideal if you need to use very small AI models (under 8 billion parameters) or if your AI model doesn't support 'tool calling' capabilities for searching documents.

personal-knowledge-base document-qa private-data-analysis local-ai-assistant information-retrieval
No package. No dependents.

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25


Stars: 271
Forks: 52
Language: Python
License: MIT
Last pushed: Jan 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/amscotti/local-LLM-with-RAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
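For scripted use, the same endpoint can be called from Python. The sketch below builds the URL shown in the curl example and fetches the report with the standard library; the response is assumed to be a JSON body, since the actual schema is not documented here.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str, topic: str = "rag") -> str:
    """Build the quality-report URL for a repository.

    The 'topic' path segment ("rag" here) mirrors the curl example above.
    """
    return f"{API_BASE}/{topic}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (assumes a JSON response)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Requires network access; subject to the 100 requests/day limit.
    print(fetch_quality("amscotti", "local-LLM-with-RAG"))
```

Unauthenticated calls are limited to 100 requests/day, so cache results if you poll many repositories.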