aws-samples/simplified-corrective-rag
How to build a simplified Corrective RAG assistant with Amazon Bedrock using LLMs, an embeddings model, Knowledge Bases for Amazon Bedrock, and Agents for Amazon Bedrock.
This project helps developers build more reliable AI assistants by addressing a common problem: large language models (LLMs) can 'hallucinate' or return incorrect information. Given an existing knowledge base and a user query, it checks whether the knowledge base can answer the query; if not, it automatically performs a web search to find accurate information. It is aimed at AI solution architects and machine learning engineers building generative AI applications who need to ensure accuracy.
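The corrective flow described above can be sketched in a few lines. This is a minimal illustration only, assuming hypothetical `retrieve_from_kb` and `web_search` helpers; in the actual sample, retrieval would go through Knowledge Bases for Amazon Bedrock and the fallback through a web-search tool invoked by an Agent.

```python
# Minimal sketch of the corrective RAG control flow.
# retrieve_from_kb and web_search are hypothetical stand-ins for
# Knowledge Bases for Amazon Bedrock retrieval and a web-search tool.

def retrieve_from_kb(query):
    # Placeholder: a real implementation would call the Bedrock
    # agent-runtime Retrieve API and return relevant chunks.
    return []

def web_search(query):
    # Placeholder: a real implementation would call an external search API.
    return ["web result for: " + query]

def corrective_rag(query, relevance_threshold=1):
    """Answer from the knowledge base when it has enough relevant
    chunks; otherwise fall back to a web search (the 'corrective' step)."""
    chunks = retrieve_from_kb(query)
    if len(chunks) >= relevance_threshold:
        return {"source": "knowledge_base", "context": chunks}
    return {"source": "web_search", "context": web_search(query)}

result = corrective_rag("What is Amazon Bedrock?")
print(result["source"])
```

With an empty knowledge base, as in the placeholder above, every query falls through to the web-search branch; the threshold parameter controls how many retrieved chunks count as a sufficient in-house answer.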
No commits in the last 6 months.
Use this if you are developing generative AI applications with Amazon Bedrock and need to ensure your AI assistants provide accurate, factual responses even when internal knowledge bases are incomplete.
Not ideal if you are looking for an off-the-shelf chatbot or a solution that doesn't require setting up AWS cloud resources and development environments.
Stars: 16
Forks: 4
Language: Jupyter Notebook
License: MIT-0
Category:
Last pushed: May 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/aws-samples/simplified-corrective-rag"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
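The same data can be fetched programmatically. The sketch below builds the documented endpoint URL and retrieves it with the standard library; the JSON response shape and the API-key header name are assumptions, not part of the published documentation.

```python
# Fetch repo quality data from the pt-edge API.
# The response schema and the "X-API-Key" header name are assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """GET the quality record; pass api_key for the higher rate limit."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Header name is a guess; check the API docs for the real one.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(quality_url("aws-samples", "simplified-corrective-rag"))
```

`quality_url` reproduces exactly the URL shown in the curl example above; `fetch_quality` wraps it for use from a notebook or script.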
Higher-rated alternatives:
aws-samples/generative-ai-use-cases: Application implementation with business use cases for safely utilizing generative AI in...
aws-samples/serverless-rag-demo: Amazon Bedrock Foundation models with Amazon OpenSearch Serverless as a vector DB
aws-samples/amazon-bedrock-rag: Fully managed RAG solution implemented using Knowledge Bases for Amazon Bedrock
IBM/granite-workshop: Source code for the IBM Granite AI Model Workshop
aws-samples/rag-with-amazon-bedrock-and-opensearch: Opinionated sample on how to build and deploy a RAG application with Amazon Bedrock and OpenSearch