LLM-Implementation/private-rag-embeddinggemma

🔒 100% Private RAG Stack with EmbeddingGemma, SQLite-vec & Ollama - Zero Cost, Offline Capable

Score: 39 / 100 (Emerging)

This project helps you build a fully private question-answering system over your own documents, running entirely on your laptop. You ingest your documents, ask questions about them, and receive answers generated by a local AI model; nothing leaves your machine. This is ideal for researchers, analysts, or anyone who needs to query sensitive information without sending it to cloud-based AI services.
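The flow described above (embed documents, search them locally, then answer with a local model) can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the toy bag-of-words `embed` function stands in for EmbeddingGemma vectors produced via Ollama, and the in-memory list stands in for a sqlite-vec table.

```python
import math
import re

# Toy stand-in for EmbeddingGemma: a bag-of-words vector over a tiny fixed
# vocabulary. In the real stack, Ollama's embedding endpoint produces the
# vectors and sqlite-vec stores and searches them.
VOCAB = ["invoice", "contract", "deadline", "payment", "meeting", "notes"]

def embed(text: str) -> list[float]:
    words = re.findall(r"[a-z]+", text.lower())
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Ingest: embed each document once and keep (text, vector) pairs.
docs = [
    "contract payment deadline is friday",
    "meeting notes from monday",
]
index = [(d, embed(d)) for d in docs]  # in-memory stand-in for a vector table

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the question vector.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# Build a grounded prompt from the best-matching chunk; in the real stack this
# prompt would be sent to a local model through Ollama's chat/generate API.
context = retrieve("when is the payment deadline?")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: when is the payment deadline?"
```

The retrieval step is the heart of the privacy guarantee: both the embeddings and the similarity search stay on disk and in local memory, so swapping the toy index for a sqlite-vec table changes the storage layer without changing the flow.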

No commits in the last 6 months.

Use this if you need to build a secure, offline question-answering system that processes sensitive documents or proprietary data without any external cloud services or API costs.

Not ideal if you need a solution for very large datasets that exceed your laptop's memory, or if you prefer a managed, cloud-based service for convenience.

private-data-analysis document-qa local-ai research-assist confidential-information
Status: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 17 / 25


Stars: 11
Forks: 9
Language: Jupyter Notebook
License: (not listed)
Last pushed: Sep 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/LLM-Implementation/private-rag-embeddinggemma"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.