fau-masters-collected-works-cgarbin/gpt-all-local
A "chat with your data" example: using a large language model (LLM) to interact with your own (local) data. Everything runs locally: the embedding model, the LLM, and the vector database. This is an example of retrieval-augmented generation (RAG): relevant sections are retrieved from your documents and passed to the LLM as part of the prompt (see the pictures in the repository).
This project helps you chat with your own documents and get answers directly from your files, all on your personal computer. You feed it a collection of documents like PDFs or Word files, ask questions, and it provides answers based solely on your data. This is for anyone who needs to quickly find information within large sets of private documents without uploading them to external services.
Use this if you need to extract specific answers from your private documents, like research papers, reports, or legal texts, using a conversational interface.
Not ideal if you're looking for general knowledge, information outside of your provided documents, or if you need to process extremely large archives that exceed your computer's processing power.
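The RAG flow described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: simple word overlap stands in for the real embedding model and vector database, and the document chunks and question are made-up examples.

```python
# Sketch of the retrieval-augmented generation (RAG) flow: score local
# document chunks against the question, then build a prompt that passes
# the most relevant chunks to the LLM as context.
# NOTE: word overlap is a toy stand-in for embeddings + a vector DB.

def score(question: str, chunk: str) -> float:
    """Toy relevance score: fraction of question words found in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / len(q_words)

def build_prompt(question: str, chunks: list[str], top_k: int = 2) -> str:
    """Retrieve the top_k most relevant chunks and embed them in the prompt."""
    ranked = sorted(chunks, key=lambda c: score(question, c), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Made-up example documents and question
chunks = [
    "The report was filed in March and covers fiscal year 2023.",
    "Quarterly revenue grew 12 percent, driven by subscriptions.",
    "The appendix lists all contributing authors.",
]
prompt = build_prompt("How much did revenue grow?", chunks)
print(prompt)
```

In the real project, `score` and the ranking step are replaced by an embedding model and a vector database, but the prompt-assembly step is the essence of RAG: the LLM only ever sees the question plus the retrieved context.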
Stars: 30
Forks: 5
Language: Python
License: —
Category:
Last pushed: Jan 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/fau-masters-collected-works-cgarbin/gpt-all-local"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
biocypher/biochatter
Backend library for conversational AI in biomedicine
pgalko/BambooAI
A Python library powered by Large Language Models (LLMs) for conversational data discovery and analysis.
redis-developer/ArXivChatGuru
Use ArXiv ChatGuru to talk to research papers. This app uses LangChain, OpenAI, Streamlit, and...
7-docs/7-docs
Use local files or a public GitHub repository as a source and ask questions about it through ChatGPT.
Kedhareswer/QuantumPDF_ChatApp_VectorDB
QuantumPDF V1.3 enables intelligent conversations with PDF documents. Built with Next.js 15 and...