fau-masters-collected-works-cgarbin/gpt-all-local

A "chat with your data" example: using a large language model (LLM) to interact with our own (local) data. Everything is local: the embedding model, the LLM, the vector database. This is an example of retrieval-augmented generation (RAG): we find relevant sections from our documents and pass them to the LLM as part of the prompt (see pics).
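The retrieve-then-prompt loop described above can be sketched in a few lines. This is a toy illustration, not the repo's code: the function names are made up, and the keyword-overlap scoring stands in for the embedding-similarity search a real vector database performs.

```python
def retrieve(question, chunks, k=2):
    # Rank document chunks by word overlap with the question.
    # (Stand-in for embedding similarity search in a vector DB.)
    q = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, context):
    # Pass the retrieved sections to the LLM as part of the prompt,
    # instructing it to answer only from the provided data.
    return ("Answer using only this context:\n"
            + "\n".join(context)
            + f"\nQuestion: {question}")

chunks = [
    "The report was filed in 2023.",
    "Cats are mammals.",
    "The budget grew 5% in 2023.",
]
question = "What year was the report filed?"
prompt = build_prompt(question, retrieve(question, chunks))
```

The prompt would then be sent to the local LLM; because the context comes only from your own files, the answers stay grounded in your data.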

Overall score: 38 / 100 (Emerging)

This project helps you chat with your own documents and get answers directly from your files, all on your personal computer. You feed it a collection of documents like PDFs or Word files, ask questions, and it provides answers based solely on your data. This is for anyone who needs to quickly find information within large sets of private documents without uploading them to external services.

Use this if you need to extract specific answers from your private documents, like research papers, reports, or legal texts, using a conversational interface.

Not ideal if you're looking for general knowledge, information outside of your provided documents, or if you need to process extremely large archives that exceed your computer's processing power.

document-qa private-data-analysis information-retrieval knowledge-base-query local-search
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 13 / 25


Stars: 30
Forks: 5
Language: Python
License: None
Last pushed: Jan 15, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/fau-masters-collected-works-cgarbin/gpt-all-local"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
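For scripted access, the same endpoint shown in the curl example can be called from Python. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, so the result is returned as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, repo):
    # Build the API path from the segments used in the curl example above.
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem, repo):
    # Unauthenticated calls are limited to 100 requests/day (per the note above).
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

url = quality_url("vector-db",
                  "fau-masters-collected-works-cgarbin/gpt-all-local")
```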