ThomasJanssen-tech/Local-RAG-with-Ollama
Build a 100% local Retrieval Augmented Generation (RAG) system with Python, LangChain, Ollama and ChromaDB!
This project shows Python developers how to build a custom chatbot that answers questions from their own documents. You feed it your files, it indexes them, and the resulting question-answering system runs entirely on your local machine, so no data is sent to external services. It is aimed at developers who need a specialized AI assistant while keeping their data private.
No commits in the last 6 months.
Use this if you are a Python developer and want to create a private, offline chatbot that can answer questions using your specific local data.
Not ideal if you're not a developer or need a pre-built, production-ready chatbot solution.
Stars
76
Forks
48
Language
Python
License
—
Category
Last pushed
May 30, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/ThomasJanssen-tech/Local-RAG-with-Ollama"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Compare
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is a leading framework for building LLM-powered agents over your data
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)