rag-with-amazon-postgresql-using-pgvector-and-sagemaker and rag-with-amazon-opensearch-and-sagemaker

These repositories are ecosystem siblings: both are reference implementations of the Retrieval-Augmented Generation (RAG) pattern that use Amazon SageMaker endpoints for embeddings and LLM inference, but each demonstrates the pipeline with a different vector database backend (PostgreSQL with the pgvector extension versus Amazon OpenSearch Service), letting users choose based on their existing infrastructure or requirements.
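The shared pattern can be sketched as a pipeline coded against a common vector-store interface, so the pgvector and OpenSearch backends are interchangeable. The `embed` stub and the in-memory store below are illustrative stand-ins (not code from either repository); in the real samples, embedding comes from a SageMaker endpoint and `add`/`search` are backed by pgvector or OpenSearch k-NN.

```python
from typing import Protocol


class VectorStore(Protocol):
    """Interface either backend (pgvector or OpenSearch k-NN) would satisfy."""

    def add(self, doc_id: str, embedding: list[float], text: str) -> None: ...
    def search(self, embedding: list[float], k: int) -> list[str]: ...


class InMemoryStore:
    """Toy stand-in for either backend, ranking by cosine similarity."""

    def __init__(self) -> None:
        self._rows: list[tuple[str, list[float], str]] = []

    def add(self, doc_id: str, embedding: list[float], text: str) -> None:
        self._rows.append((doc_id, embedding, text))

    def search(self, embedding: list[float], k: int) -> list[str]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(y * y for y in b) ** 0.5
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self._rows, key=lambda r: cos(r[1], embedding), reverse=True)
        return [text for _, _, text in ranked[:k]]


def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a SageMaker embedding endpoint call:
    # a bag-of-words vector over a tiny fixed vocabulary.
    vocab = ["vacation", "policy", "expense", "report"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]


# The application code depends only on the VectorStore interface.
store: VectorStore = InMemoryStore()
store.add("d1", embed("vacation policy"), "Employees get 20 vacation days.")
store.add("d2", embed("expense report"), "Submit expense reports monthly.")
context = store.search(embed("how many vacation days"), k=1)
```

Because retrieval goes through the `VectorStore` interface, swapping PostgreSQL/pgvector for OpenSearch (or vice versa) only changes the store implementation, not the question-answering logic.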

rag-with-amazon-postgresql-using-pgvector-and-sagemaker
  Maintenance: 0/25 | Adoption: 6/25 | Maturity: 16/25 | Community: 15/25
  Stars: 16 | Forks: 5 | Downloads: | Commits (30d): 0
  Language: Python | License: MIT-0
  Status: Archived, Stale 6m, No Package, No Dependents

rag-with-amazon-opensearch-and-sagemaker
  Maintenance: 0/25 | Adoption: 7/25 | Maturity: 16/25 | Community: 9/25
  Stars: 29 | Forks: 3 | Downloads: | Commits (30d): 0
  Language: Python | License: MIT-0
  Status: Stale 6m, No Package, No Dependents

About rag-with-amazon-postgresql-using-pgvector-and-sagemaker

aws-samples/rag-with-amazon-postgresql-using-pgvector-and-sagemaker

Question Answering application with Large Language Models (LLMs) and Amazon PostgreSQL using pgvector

About rag-with-amazon-opensearch-and-sagemaker

aws-samples/rag-with-amazon-opensearch-and-sagemaker

Question Answering Generative AI application with Large Language Models (LLMs) and Amazon OpenSearch Service

This project helps you build an internal question-answering system for your business. You supply your company's documents, and the system lets users ask questions and receive answers generated by an LLM that draws only on the information you provided. This is ideal for knowledge managers, HR professionals, or anyone responsible for making large volumes of internal documentation easily searchable and digestible.
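The "answers drawn only from your provided information" behavior comes from how the prompt is assembled: retrieved passages are placed in the prompt and the model is instructed to answer only from them. A minimal sketch of that step is below; the endpoint name, payload shape, and helper names are assumptions for illustration, not code from either repository.

```python
import json


def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounded prompt: the model may answer only from the
    retrieved passages, which keeps answers tied to your documents."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def ask(question: str, passages: list[str], endpoint_name: str) -> str:
    """Hypothetical call to a SageMaker-hosted LLM (payload shape assumed)."""
    import boto3  # deferred import so the prompt builder stays dependency-free

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps({"inputs": build_prompt(question, passages)}),
    )
    return response["Body"].read().decode("utf-8")


prompt = build_prompt(
    "How many vacation days do employees get?",
    ["Employees get 20 vacation days per year."],
)
```

In both samples this prompt would be sent to a SageMaker inference endpoint; only the retrieval backend that produces `passages` differs between the two repositories.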

enterprise-search knowledge-management internal-documentation information-retrieval business-intelligence

Scores updated daily from GitHub, PyPI, and npm data.