aws-samples/Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache

This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.

Score: 21 / 100 (Experimental)

This solution helps developers who are building applications powered by Large Language Models (LLMs) to ensure those applications provide accurate and consistent information. It takes user queries as input and, by intelligently reusing previously verified answers, delivers more reliable LLM responses while also speeding up the application and reducing operational costs. Developers implementing LLM-based chatbots, virtual assistants, or Q&A systems will find this valuable.
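The core pattern described above can be sketched as: embed the incoming query, compare it against embeddings of previously verified question/answer pairs, and return the cached verified answer when similarity clears a threshold; otherwise fall back to the LLM. A minimal in-memory sketch in plain Python follows. The `VerifiedSemanticCache` class, the 0.9 threshold, and the toy embeddings are illustrative assumptions, not the repository's implementation, which backs the lookup with Amazon Bedrock Knowledge Bases:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class VerifiedSemanticCache:
    """In-memory cache of (query embedding, verified answer) pairs.

    Illustrative only: the real sample stores verified Q&A pairs in an
    Amazon Bedrock Knowledge Base rather than a Python list.
    """

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, verified_answer)

    def add(self, embedding, verified_answer):
        self.entries.append((embedding, verified_answer))

    def lookup(self, query_embedding):
        """Return the verified answer of the most similar cached entry,
        or None to signal that the caller should fall back to the LLM."""
        best_score, best_answer = 0.0, None
        for emb, answer in self.entries:
            score = cosine(emb, query_embedding)
            if score > best_score:
                best_score, best_answer = score, answer
        return best_answer if best_score >= self.threshold else None

cache = VerifiedSemanticCache(threshold=0.9)
cache.add([1.0, 0.0, 0.1], "Our return window is 30 days.")
hit = cache.lookup([0.98, 0.02, 0.11])  # near-duplicate query -> cache hit
miss = cache.lookup([0.0, 1.0, 0.0])    # unrelated query -> None, call the LLM
```

A cache hit skips the LLM call entirely, which is where the latency and cost savings come from; only queries with no sufficiently similar verified answer reach the model.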

No commits in the last 6 months.

Use this if you are developing an LLM-powered application and need to prevent your AI from generating incorrect or 'hallucinated' information, while also improving response times and managing costs.

Not ideal if you are looking for a plug-and-play end-user application or do not have experience with AWS and deploying cloud infrastructure.

Tags: LLM-development, AI-application-engineering, chatbot-reliability, knowledge-retrieval, AI-cost-optimization
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 12
Forks:
Language: Jupyter Notebook
License: MIT-0
Last pushed: Apr 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/aws-samples/Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.