aws-samples/genai-system-evaluation
A set of examples demonstrating how to evaluate Generative-AI-augmented systems using traditional information-retrieval metrics and LLM-as-a-judge validation techniques.
This project helps evaluate how well your Generative AI applications, especially those using Retrieval-Augmented Generation (RAG), are performing. It takes in your AI model outputs and validation datasets, then provides scores and insights into the quality of responses. This is for AI developers, machine learning engineers, and data scientists who build and refine AI systems.
Use this if you are building an AI application and need to systematically test and improve its accuracy, relevance, and overall effectiveness before deployment.
Not ideal if you want a plug-and-play evaluation tool that requires no coding or model configuration.
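The two evaluation styles named above can be sketched briefly. This is an illustrative example only; the function names, prompt wording, and scoring scale are assumptions, not code from the repository's notebooks.

```python
# Sketch of the two evaluation styles mentioned above (illustrative only;
# metric and function names are assumptions, not the repository's API).

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Traditional IR metric: fraction of relevant docs found in the top-k results."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant)

def judge_prompt(question: str, answer: str) -> str:
    """LLM-as-a-judge: build a grading prompt for a strong evaluator model."""
    return (
        "Rate the following answer from 1 to 5 for factual accuracy.\n"
        f"Question: {question}\nAnswer: {answer}\nScore:"
    )

# One relevant doc (d2) appears in the top 3, out of 2 relevant docs total.
print(recall_at_k(["d1", "d2", "d3"], {"d2", "d9"}, k=3))  # → 0.5
```

Retrieval metrics like recall@k validate the RAG retriever in isolation, while judge prompts score the generated answer end to end; systematic evaluation typically combines both.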
Stars: 11
Forks: 2
Language: Jupyter Notebook
License: MIT-0
Category:
Last pushed: Oct 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/aws-samples/genai-system-evaluation"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
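The curl command above can also be issued from Python. A minimal sketch using only the standard library; the endpoint URL comes from the listing, but the shape of the JSON response is not documented here, so no fields are assumed.

```python
# Sketch: calling the pt-edge quality API from Python (stdlib only).
# The endpoint path is taken from the curl example above; response fields
# are not assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality record."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("generative-ai", "aws-samples", "genai-system-evaluation")
print(url)

# Uncomment to fetch (free tier: 100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```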
Higher-rated alternatives
GoogleCloudPlatform/vertex-ai-samples
Notebooks, code samples, sample apps, and other resources that demonstrate how to use, develop...
neo4j-partners/hands-on-lab-neo4j-and-google
Hands on Lab for Neo4j and Google
lynnlangit/learning-cloud
Courses, sample code, articles & screencasts - AWS, Azure, & GCP
GoogleCloudPlatform/applied-ai-engineering-samples
This repository compiles code samples and notebooks demonstrating how to use Generative AI on...
streamlit/30DaysOfAI
30 Days of AI