abhirockzz/local-llms-rag-cosmosdb

RAG application with LangChain and Local LLMs powered by Ollama

Score: 29 / 100 (Experimental)

This project helps developers integrate local large language models (LLMs) with Azure Cosmos DB to build retrieval-augmented generation (RAG) applications. It combines your private data with LLM queries to produce relevant, context-aware responses. It's ideal for software engineers and data scientists who want to build AI applications over their own data without relying solely on cloud-based LLMs.
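The core of the retrieval step in a RAG pipeline can be sketched roughly as below. This is an illustrative toy, not the project's actual implementation: the bag-of-words "embedding" and all function names here are placeholders, where the real application would use LangChain with Ollama embedding models and Azure Cosmos DB vector search.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding" for illustration only; a real RAG app
    # would call an embedding model (e.g. one served by Ollama).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k;
    # a vector database (like Cosmos DB) does this at scale.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Stuff the retrieved context into the prompt sent to the local LLM.
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The resulting prompt would then be passed to a locally running model, grounding the answer in the retrieved private data rather than the model's training set alone.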

No commits in the last 6 months.

Use this if you are a developer building AI applications and want to leverage your own data with local LLMs, using Azure Cosmos DB for vector search.

Not ideal if you are an end-user looking for a ready-to-use RAG application rather than a toolkit for building one.

AI-application-development data-private-LLM vector-database-integration local-LLM-deployment information-retrieval
No License · Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 7 / 25
Community 15 / 25


Stars: 13
Forks: 4
Language: Python
License: none
Last pushed: Jul 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/abhirockzz/local-llms-rag-cosmosdb"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.