LLM-Implementation/private-rag-embeddinggemma
🔒 100% Private RAG Stack with EmbeddingGemma, SQLite-vec & Ollama - Zero Cost, Offline Capable
This project builds a completely private question-answering system over your own documents, running entirely on your laptop. You ingest your private documents or data, then ask questions about that information and receive answers generated by a local AI model. It is ideal for researchers, analysts, or anyone who needs to query sensitive information without sending it to cloud-based AI services.
No commits in the last 6 months.
Use this if you need to build a secure, offline question-answering system that processes sensitive documents or proprietary data without any external cloud services or API costs.
Not ideal if you need a solution for very large datasets that exceed your laptop's memory, or if you prefer a managed, cloud-based service for convenience.
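The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not code from the repo: the `embed` and `generate` callables are placeholders for local model calls (e.g. via the Ollama Python client, whose exact usage is an assumption here), and similarity is computed in pure Python rather than inside SQLite-vec.

```python
# Minimal sketch of a private RAG answer loop: embed the question,
# rank stored chunks by cosine similarity, and prompt a local LLM
# with the best-matching chunks as context.
import math


def cosine(a, b):
    """Cosine similarity between two non-zero embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def answer(question, chunks, embed, generate, top_k=3):
    """Retrieve the top_k most similar chunks and generate an answer.

    `embed` maps text -> vector; `generate` maps prompt -> text.
    Both would be backed by local models in the real stack.
    """
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return generate(prompt)


if __name__ == "__main__":
    # With Ollama running locally, the callables might look like this
    # (model names and response shapes are assumptions, check the notebook):
    #   import ollama
    #   embed = lambda t: ollama.embeddings(model="embeddinggemma", prompt=t)["embedding"]
    #   generate = lambda p: ollama.generate(model="gemma", prompt=p)["response"]
    pass
```

In the actual stack the `sorted(...)` scan would be replaced by a SQLite-vec KNN query, so the ranking happens inside the database instead of in Python.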
Stars
11
Forks
9
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Sep 10, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/LLM-Implementation/private-rag-embeddinggemma"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
kreuzberg-dev/kreuzberg-surrealdb
Extract, chunk, and embed documents from 88+ formats directly into SurrealDB.
Vatsal-Founder/Hybrid-Search-with-LangChain-and-Pinecone
Hybrid search RAG system combining BM25 sparse + dense embeddings via LangChain and Pinecone 35%...
perzeuss/strapi-plugin-embeddings
A Strapi plugin for embedding support, utilizing Chroma as the database for embeddings. Use...
CL-lau/chroma-plus
The AI-native open-source embedding database, extended ("plus").
AmanPriyanshu/YC-Dendrolinguistics
Cultivating linguistic forests from YC startup pitches using bio-inspired grammar trees to map...