RAG-using-Llama3-Langchain-and-ChromaDB and Local-RAG-with-Ollama

These projects are alternative implementations of the same local RAG stack: both use LangChain for orchestration and ChromaDB for vector retrieval, but they differ in how the model is served. The first runs Llama 3 through LangChain directly, while the second routes inference through Ollama's local model server, so both achieve fully local inference.

Scores            RAG-using-Llama3-Langchain-and-ChromaDB   Local-RAG-with-Ollama
Maintenance       0/25                                      2/25
Adoption          10/25                                     9/25
Maturity          16/25                                     7/25
Community         21/25                                     22/25

Stats             RAG-using-Llama3-Langchain-and-ChromaDB   Local-RAG-with-Ollama
Stars             131                                       76
Forks             35                                        48
Downloads         n/a                                       n/a
Commits (30d)     0                                         0
Language          Jupyter Notebook                          Python
License           MIT                                       none
Flags             Stale 6m, No Package, No Dependents       Stale 6m, No Package, No Dependents, No License

About RAG-using-Llama3-Langchain-and-ChromaDB

GURPREETKAURJETHRA/RAG-using-Llama3-Langchain-and-ChromaDB

RAG using Llama3, Langchain and ChromaDB

This project helps you build a system that can answer questions about your specific documents, even if a general AI model hasn't been trained on them. You provide your own documents, and the system allows you to ask questions about their content, delivering accurate answers. It's designed for developers and AI engineers who need to create custom, knowledge-driven AI applications.

AI-engineering information-retrieval natural-language-processing custom-chatbot knowledge-management
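The repository's own code is not reproduced here, but the RAG pattern it implements can be sketched without any dependencies: split documents into chunks, retrieve the chunks most relevant to a question, and hand them to the LLM as context. In the actual project, LangChain text splitters and ChromaDB vector similarity do this work; the naive term-overlap scoring below is only an illustrative stand-in, and all names are hypothetical.

```python
# Dependency-free sketch of the retrieval step in a RAG pipeline.
# Term overlap stands in for the embedding similarity ChromaDB would compute.

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word chunks (LangChain splitters do this smarter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk_text: str, question: str) -> int:
    """Count question terms appearing in the chunk (stand-in for vector similarity)."""
    q_terms = set(question.lower().split())
    return sum(1 for w in chunk_text.lower().split() if w in q_terms)

def retrieve(docs: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the question."""
    chunks = [c for d in docs for c in chunk(d)]
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

docs = ["ChromaDB stores embeddings for retrieval.",
        "Llama 3 generates answers from retrieved context."]
top = retrieve(docs, "How are embeddings stored?")
print(top[0])  # the chunk mentioning embeddings ranks first
```

The retrieved chunks would then be inlined into the prompt sent to Llama 3, which is what lets the model answer from documents it was never trained on.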

About Local-RAG-with-Ollama

ThomasJanssen-tech/Local-RAG-with-Ollama

Build a 100% local Retrieval Augmented Generation (RAG) system with Python, LangChain, Ollama and ChromaDB!

This project helps Python developers build a custom chatbot that can answer questions based on their own documents. You feed it your documents, and it creates a question-answering system that runs entirely on your local machine. This is for developers who need to create specialized AI assistants without sending their data to external services.

local-AI chatbot-development data-privacy offline-AI custom-knowledge-base
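The generation step of this project can be sketched against Ollama's local REST API, which listens on http://localhost:11434 by default. The prompt-building helper below is illustrative rather than the repository's actual code, and the HTTP call assumes a running `ollama serve` with the llama3 model pulled.

```python
# Sketch of answering from local context via a local Ollama server.
# build_prompt is a hypothetical helper; the endpoint and payload follow
# Ollama's /api/generate API (non-streaming mode).
import json
import urllib.request

def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Inline retrieved chunks so the model answers from local documents only."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """POST the prompt to a local Ollama instance and return its reply."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body.encode(), headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    prompt = build_prompt("Where is data stored?",
                          ["ChromaDB persists embeddings on disk."])
    print(prompt)                 # inspecting the prompt needs no network
    # print(ask_ollama(prompt))   # requires a running `ollama serve`
```

Because both retrieval (ChromaDB) and generation (Ollama) run on localhost, no document content ever leaves the machine, which is the data-privacy point the project emphasizes.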

Scores updated daily from GitHub, PyPI, and npm data.