ObsidianRAG and ragbase
Both are local RAG systems: ObsidianRAG focuses on querying your Obsidian notes, while ragbase offers a more general document chat UI with advanced RAG features such as reranking and semantic chunking.
About ObsidianRAG
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)
This tool allows Obsidian users to ask questions in plain language and get intelligent answers directly from their personal notes. You provide your existing Obsidian vault, and the system uses local AI to retrieve and synthesize information, giving you a clear answer along with links to the original notes as sources. This is perfect for knowledge workers, researchers, or anyone who maintains a large personal knowledge base in Obsidian.
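The retrieve-and-synthesize step described above can be sketched with a toy retriever. This is an illustrative stand-in, not ObsidianRAG's actual LangGraph pipeline: word-count vectors replace real embeddings, and the mini-vault (`gardening.md`, etc.) is hypothetical, standing in for a folder of Obsidian `.md` files.

```python
import re
from collections import Counter
from math import sqrt

def tokenize(text: str) -> Counter:
    # Lowercase word counts serve as a stand-in for real embedding vectors.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, vault: dict[str, str], k: int = 2) -> list[tuple[str, float]]:
    """Rank notes by similarity to the question; return (note name, score) pairs.

    In the real system the top notes would then be passed to a local LLM
    (e.g. via Ollama) to synthesize an answer citing the notes as sources.
    """
    q = tokenize(question)
    scored = [(name, cosine(q, tokenize(body))) for name, body in vault.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

# Hypothetical mini-vault standing in for a real Obsidian folder of .md files.
vault = {
    "gardening.md": "Tomatoes need full sun and regular watering.",
    "networking.md": "TCP handshakes use SYN, SYN-ACK, and ACK packets.",
    "cooking.md": "Roast tomatoes with olive oil for a rich sauce.",
}
hits = retrieve("How do I grow tomatoes in the sun?", vault)
```

The returned note names double as source links, which mirrors how the tool cites the original notes alongside its answer.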
About ragbase
curiousily/ragbase
Completely local RAG: chat with your PDF documents using an open LLM and a UI built with LangChain, Streamlit, Ollama (Llama 3.1), and Qdrant, plus advanced methods like reranking and semantic chunking.
This tool helps you privately and securely chat with your PDF documents, allowing you to ask questions and get answers without your information ever leaving your computer. You provide your PDF files, and the system processes them to create a searchable knowledge base. The output is conversational answers to your questions, drawing directly from your uploaded documents. It's ideal for researchers, analysts, or anyone who needs to extract information from multiple documents while maintaining strict data privacy.
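The two advanced techniques ragbase advertises can be sketched in miniature. This is a toy illustration under stated assumptions, not ragbase's actual code: Jaccard word overlap stands in for embedding similarity in the semantic-chunking step, and a term-overlap score stands in for the cross-encoder-style reranker; the example sentences are hypothetical.

```python
import re
from collections import Counter

def words(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def semantic_chunks(sentences: list[str], threshold: float = 0.2) -> list[str]:
    """Greedily merge adjacent sentences whose word overlap exceeds the
    threshold -- a toy version of semantic chunking (real systems compare
    embedding vectors rather than raw word sets)."""
    chunks = [sentences[0]]
    for sent in sentences[1:]:
        prev, cur = set(words(chunks[-1])), set(words(sent))
        overlap = len(prev & cur) / len(prev | cur)  # Jaccard similarity
        if overlap >= threshold:
            chunks[-1] = chunks[-1] + " " + sent  # same topic: extend chunk
        else:
            chunks.append(sent)                   # topic shift: new chunk
    return chunks

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Second-stage scoring: reorder first-stage candidates by term-frequency
    overlap with the query (a stand-in for a learned reranker model)."""
    q = Counter(words(query))
    def score(c: str) -> int:
        cw = Counter(words(c))
        return sum(min(q[t], cw[t]) for t in q)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical sentences standing in for text extracted from an uploaded PDF.
sentences = [
    "Qdrant stores vectors for fast search.",
    "Qdrant vectors enable similarity search at scale.",
    "Llama models generate fluent answers.",
]
chunks = semantic_chunks(sentences)            # first two sentences merge
ranked = rerank("How does Qdrant search vectors?", chunks)
```

Chunking by topic rather than fixed length keeps related sentences together, and reranking lets a cheap first-stage retrieval fetch broadly before a finer score orders the candidates, which is the retrieve-then-rerank pattern ragbase applies to PDF chunks.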