SR-Sujon/llamachirp
Engage in dynamic conversations with PDFs to extract and comprehend information, using locally hosted LLMs served through Ollama and retrieval-augmented generation (RAG).
This helps you quickly understand and extract specific information from long PDF documents by having a natural conversation with them. You provide a PDF, ask questions about its content, and receive concise answers. This is ideal for researchers, analysts, or anyone who needs to efficiently get answers from complex reports or articles without reading every page.
No commits in the last 6 months.
Use this if you need to quickly find answers or summarize information from one or more PDF documents through a chat interface.
Not ideal if you need to analyze highly structured data in tables or perform complex data transformations, or if you don't want to run software locally.
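The core RAG loop behind tools like this is: split the PDF text into chunks, embed each chunk, retrieve the chunks most similar to the question, and pass them to a local model as context. A minimal, illustrative sketch of that retrieval step is below. This is not llamachirp's actual code: the PDF loading, embedding model, vector store, and Ollama call are replaced with a toy bag-of-words retriever so the example runs with the standard library alone.

```python
# Toy sketch of RAG retrieval: chunk a document, score chunks against a
# question, and return the best matches as context for a local LLM.
# Bag-of-words cosine similarity stands in for real embeddings.
import math
import re
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into consecutive chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def vectorize(text: str) -> Counter:
    """Term counts over lowercase word tokens (stand-in for embeddings)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = vectorize(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)
    return ranked[:k]

document = (
    "Retrieval augmented generation pairs a language model with a "
    "search step. The document is split into chunks, each chunk is "
    "embedded, and at question time the most similar chunks are "
    "retrieved and passed to the model as context."
)
context = retrieve("How are chunks selected at question time?", chunk(document, 15))
# In a real pipeline, `context` plus the question would now be sent to a
# local Ollama model to generate a grounded answer.
print(context[0])
```

In the real project, the embedding and generation steps would be handled by an Ollama-served model and a proper vector index; the control flow, however, follows the same chunk-embed-retrieve-generate shape shown here.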
Stars: 7
Forks: 2
Language: Python
License: —
Category:
Last pushed: May 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/SR-Sujon/llamachirp"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
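The same endpoint can be called from Python. The sketch below builds the URL shown above and fetches it with the standard library; the shape of the JSON response is an assumption, since the page does not document the payload fields.

```python
# Hedged sketch: query the catalog API for a repo's quality data.
# Only the endpoint URL comes from the page; the response structure
# is not documented here, so we just parse and print the raw JSON.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record for owner/repo."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Network call happens only when run as a script.
    print(fetch_quality("SR-Sujon", "llamachirp"))
```

Without a key this counts against the shared 100 requests/day quota, so cache responses rather than polling.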
Higher-rated alternatives
vndee/local-assistant-examples
Build your own ChatPDF and run it locally
datvodinh/rag-chatbot
Chat with multiple PDFs locally
shibing624/ChatPDF
RAG for local LLMs: chat with PDF/doc/txt files, ChatPDF-style
couchbase-examples/rag-demo
A RAG demo using LangChain that allows you to chat with your uploaded PDF documents
Isa1asN/local-rag
Local rag using ollama, langchain and chroma.