dronefreak/local_rag_pipeline
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
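The RAG Fusion step mentioned above merges the result lists from several query variants into one ranking. The repository uses LangChain for this; the sketch below is a hypothetical, dependency-free illustration of the underlying reciprocal rank fusion idea, not the project's actual code (the document IDs and lists are made up):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of document IDs into one ranking.

    A document scores 1 / (k + rank) in every list it appears in, and the
    scores are summed across lists. k=60 is a common default from the
    original RRF literature.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three query variants returned three (partly overlapping) result lists:
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
])
# doc_b ranks first: it placed highly in all three lists.
```

Documents that appear consistently near the top of multiple lists win out over documents that rank first in only one list, which is why fusing rewrites of the same question tends to be more robust than a single query.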
This tool helps researchers, analysts, and anyone working with large document collections quickly find answers to specific questions. Feed it your own PDFs, text files, and CSVs, then ask questions in plain language and receive concise answers drawn directly from your documents. It is well suited to professionals who need to extract precise information from extensive private datasets, such as technical specifications or research papers.
No commits in the last 6 months.
Use this if you need to build a private, intelligent Q&A system for your own documents that runs entirely on your local computer, ensuring data privacy and quick responses.
Not ideal if you need a publicly available chatbot, do not have access to a dedicated NVIDIA GPU, or primarily work with very small, easily searchable document sets.
Stars: 8
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Aug 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/dronefreak/local_rag_pipeline"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is the leading document agent and OCR platform
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)