aws-samples/simplified-corrective-rag

How to build a simplified Corrective RAG assistant with Amazon Bedrock using LLMs, an embeddings model, Knowledge Bases for Amazon Bedrock, and Agents for Amazon Bedrock.

Score: 37 / 100 (Emerging)

This project helps developers build more reliable AI assistants by addressing a common problem: large language models (LLMs) can "hallucinate" or provide incorrect information. Given an existing knowledge base and a user query, it answers from the knowledge base when possible; if the knowledge base lacks the answer, it automatically falls back to a web search to find accurate information. It is aimed at AI solution architects and machine learning engineers building generative AI applications who need to ensure accuracy.
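The retrieve-then-correct flow described above can be sketched as plain control logic. This is a minimal illustration, not the repository's implementation: the helper names, the relevance threshold, and the stubbed return values are all assumptions, and the real project wires these steps through Knowledge Bases and Agents for Amazon Bedrock.

```python
# Sketch of the corrective-RAG control flow. All names below are
# hypothetical stand-ins for the Bedrock-backed components the repo uses.

RELEVANCE_THRESHOLD = 0.5  # assumption: a tunable confidence cutoff


def retrieve_from_kb(query):
    """Stand-in for a Knowledge Bases for Amazon Bedrock retrieval call.

    Returns (passages, top_relevance_score); stubbed to simulate a miss.
    """
    return [], 0.0


def web_search(query):
    """Stand-in for the web-search fallback tool an agent would invoke."""
    return [f"web result for: {query}"]


def gather_context(query):
    """Return grounding passages, correcting via web search on a KB miss."""
    passages, score = retrieve_from_kb(query)
    if not passages or score < RELEVANCE_THRESHOLD:
        # The knowledge base could not answer confidently, so fall back
        # to a web search instead of letting the LLM guess.
        passages = web_search(query)
    # In the real assistant, an LLM call would now generate the final
    # answer grounded in `passages`.
    return passages
```

The key design point is the explicit confidence gate: generation is only ever grounded in passages that passed either the knowledge-base check or the corrective web search.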

No commits in the last 6 months.

Use this if you are developing generative AI applications with Amazon Bedrock and need to ensure your AI assistants provide accurate, factual responses even when internal knowledge bases are incomplete.

Not ideal if you are looking for an off-the-shelf chatbot or a solution that doesn't require setting up AWS cloud resources and development environments.

AI application development · Generative AI accuracy · Large Language Model (LLM) reliability · AWS Bedrock solutions · Information retrieval
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25

How are scores calculated?

Stars: 16
Forks: 4
Language: Jupyter Notebook
License: MIT-0
Last pushed: May 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/aws-samples/simplified-corrective-rag"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
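The same endpoint can be called from code instead of curl. A minimal Python sketch follows; the response is assumed to be JSON, and the `Authorization: Bearer` header for keyed access is an assumption — check the API's own documentation for the actual key mechanism.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner, repo, api_key=None):
    """Fetch a repository's quality report.

    Assumes a JSON response body; the header used to pass `api_key`
    is a guess and may differ from the real API.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For the repository on this page, `fetch_quality("aws-samples", "simplified-corrective-rag")` would hit the same URL as the curl command above.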