tyrell/llm-ollama-llamaindex-bootstrap
Designed for offline use, this RAG application template offers a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services like OpenAI.
This project helps developers build question-answering applications that work without internet access: it ingests your text documents and lets users ask questions, returning relevant answers drawn directly from your private data. It suits developers who want to build secure, offline AI solutions.
No commits in the last 6 months.
Use this if you are a developer looking for a ready-to-go template to build a local, offline Retrieval-Augmented Generation (RAG) application using your own data.
Not ideal if you are an end-user looking for a ready-to-use application without any development work.
Stars
48
Forks
17
Language
Python
License
Apache-2.0
Category
RAG
Last pushed
Feb 23, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/tyrell/llm-ollama-llamaindex-bootstrap"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
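The same endpoint can be queried programmatically. A minimal sketch in Python using only the standard library, assuming the URL pattern shown in the curl command above; the response schema is not documented here, so the code just decodes and prints whatever JSON comes back:

```python
# Hedged sketch: fetch this repo's quality data from the pt-edge API.
# The endpoint path mirrors the curl example; the JSON response schema
# is an assumption, so inspect the raw payload before relying on fields.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (network access required)."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("rag", "tyrell", "llm-ollama-llamaindex-bootstrap")
    print(json.dumps(data, indent=2))
```

For the higher 1,000/day tier you would attach your free key to the request; how the key is passed (header or query parameter) is not stated here, so check the API docs before adding it.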
Higher-rated alternatives
RapidAI/RapidRAG
QA based on local knowledge and LLM.
benitomartin/substack-newsletters-search-course
Production RAG System Course
liweiphys/layra
LAYRA—an enterprise-ready, out-of-the-box solution—unlocks next-generation intelligent systems...
LHRLAB/HyperGraphRAG
[NeurIPS 2025] Official resources of "HyperGraphRAG: Retrieval-Augmented Generation via...
limanmys/sef
On premise enterprise-grade RAG-powered agentic workflow chatbot platform with multi-provider support