tyrell/llm-ollama-llamaindex-bootstrap

Designed for offline use, this Retrieval-Augmented Generation (RAG) application template offers a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services such as OpenAI.

43 / 100 (Emerging)

This project helps developers build question-answering applications that work without internet access. It takes your text documents as input and allows users to ask questions, providing relevant answers drawn directly from your private data. It's designed for developers who want to create secure, offline AI solutions.

No commits in the last 6 months.

Use this if you are a developer looking for a ready-to-go template for building a local, offline RAG application over your own data.

Not ideal if you are an end-user looking for a ready-to-use application without any development work.
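For orientation, the core pattern such a template follows can be sketched in a few lines of LlamaIndex code. This is a minimal sketch of the general approach (a local Ollama model plus an index built over your own documents), not the project's exact code; the model name, embedding model, and "data" directory are assumptions, and it assumes LlamaIndex 0.10+ with the Ollama and HuggingFace embedding integrations installed. The template itself may wire these pieces together differently, for example through a dedicated vector store.

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Assumed names: "llama2" as the local Ollama model, a local embedding model,
# and a ./data directory holding your private documents.
Settings.llm = Ollama(model="llama2", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Ingest documents and build an in-memory vector index, entirely offline.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Ask a question; the answer is drawn from the indexed documents.
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about deployment?"))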

local-AI-development, offline-data-processing, private-data-querying, LLM-application-development, data-ingestion
Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 48

Forks: 17

Language: Python

License: Apache-2.0

Category: rag-applications

Last pushed: Feb 23, 2024

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/tyrell/llm-ollama-llamaindex-bootstrap"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
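If you would rather consume the endpoint from code, the same request can be made from Python. This is a minimal sketch using the requests library against the URL shown above; the response schema is not documented here, so it simply pretty-prints whatever JSON the endpoint returns.

import json
import requests

# Same endpoint as the curl example above; no API key is required for the free tier.
url = "https://pt-edge.onrender.com/api/v1/quality/rag/tyrell/llm-ollama-llamaindex-bootstrap"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

# Pretty-print the JSON payload as returned by the API.
print(json.dumps(resp.json(), indent=2))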