fahdmirza/doclingwithollama
Docling with Ollama - RAG on Local Files with Local Models
This tool lets you chat with your own documents in a secure, local environment. You upload PDF files (or other supported formats) and ask questions through a chat interface; answers are grounded in the content of your uploads. It suits researchers, analysts, or anyone who needs to extract information from private documents without sending them to external services.
No commits in the last 6 months.
Use this if you need to quickly find information and ask questions about your personal or sensitive PDF documents using an AI, all while keeping your data private and on your own computer.
Not ideal if you need to analyze very large collections of documents, integrate with existing enterprise systems, or collaborate on document analysis with a team.
Stars: 87
Forks: 18
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/fahdmirza/doclingwithollama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
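The same endpoint can be called programmatically. A minimal sketch using only the Python standard library; the URL path comes from the curl example above, but the shape of the JSON response is an assumption and the helper names are hypothetical:

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access).

    The response schema is not documented here, so callers should
    inspect the returned dict rather than assume specific keys.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("fahdmirza", "doclingwithollama"))
```

With an API key (for the 1,000/day tier), you would presumably pass it in a header or query parameter; the exact mechanism is not specified on this page.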
Higher-rated alternatives
run-llama/llama_index
LlamaIndex is a leading framework for building LLM-powered agents and RAG applications over your data
emarco177/documentation-helper
Reference implementation of a RAG-based documentation helper using LangChain, Pinecone, and Tavily.
janus-llm/janus-llm
Leveraging LLMs for modernization through intelligent chunking, iterative prompting and...
JetXu-LLM/llama-github
Llama-github is an open-source Python library that empowers LLM Chatbots, AI Agents, and...
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)