rag-with-amazon-postgresql-using-pgvector-and-sagemaker and rag-with-amazon-opensearch-and-sagemaker
These repositories are ecosystem siblings: both are reference implementations of the Retrieval-Augmented Generation (RAG) pattern that use SageMaker for embeddings and LLM inference, but each demonstrates a different vector database backend (PostgreSQL with the pgvector extension versus Amazon OpenSearch Service), so users can choose based on their existing infrastructure or requirements.
About rag-with-amazon-postgresql-using-pgvector-and-sagemaker
aws-samples/rag-with-amazon-postgresql-using-pgvector-and-sagemaker
Question Answering application with Large Language Models (LLMs) and Amazon PostgreSQL using pgvector
About rag-with-amazon-opensearch-and-sagemaker
aws-samples/rag-with-amazon-opensearch-and-sagemaker
Question Answering Generative AI application with Large Language Models (LLMs) and Amazon OpenSearch Service
This project helps you build an internal question-answering system for your business. You provide your company's documents, and it allows users to ask questions and receive accurate answers generated by an AI, drawing only from your provided information. This is ideal for knowledge managers, HR professionals, or anyone responsible for making large volumes of internal documentation easily searchable and digestible.
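The retrieval step both projects share can be sketched without any AWS dependencies. In the following minimal sketch, `embed` is a hypothetical stand-in for a SageMaker embedding endpoint (it hashes character trigrams into a small vector), and the linear scan in `retrieve` stands in for the nearest-neighbour search that pgvector or OpenSearch would perform at scale; none of these names come from the repositories themselves.

```python
import math
import zlib

def embed(text):
    # Hypothetical stand-in for a SageMaker embedding endpoint:
    # hashes character trigrams into a fixed-size, L2-normalized vector.
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(question, docs, k=2):
    # The vector store (pgvector or OpenSearch) would do this
    # similarity search with an index; here it is a linear scan.
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Employees accrue 20 days of paid vacation per year.",
    "The office wifi password rotates every quarter.",
    "Expense reports must be filed within 30 days of travel.",
]
context = retrieve("How many vacation days do I get?", docs, k=1)
```

In the full applications, the retrieved `context` is then inserted into the prompt sent to a SageMaker-hosted LLM, which is how the system keeps answers grounded in the company's own documents rather than the model's general knowledge.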