Addepto/contextcheck
MIT-licensed framework for testing LLMs, RAG systems, and chatbots. Configurable via YAML and integrable into CI pipelines for automated testing.
This tool helps AI product managers, quality assurance engineers, and developers ensure their Large Language Models (LLMs), RAG systems, and AI chatbots are reliable and perform as expected. You provide test scenarios with specific prompts or queries and define expected responses. The tool then automatically generates queries, runs tests to check for issues like incorrect answers or 'hallucinations', and outputs clear results, allowing for quick adjustments and improvements to your AI systems.
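A minimal sketch of what a YAML-driven test scenario could look like. The key names below are illustrative assumptions only, not contextcheck's actual configuration schema; they show the general shape of a scenario (a query in, an expected behaviour out).

# Illustrative sketch: key names are assumptions, not the real contextcheck schema.
endpoint_under_test:
  url: "http://localhost:8000/chat"        # hypothetical chatbot/RAG endpoint
tests:
  - name: "refund-policy-basics"
    query: "What is the refund window for annual plans?"
    expect:
      contains: "30 days"                  # fact the answer should include
  - name: "no-hallucinated-products"
    query: "Do you sell a lifetime license?"
    expect:
      not_contains: "lifetime license is available"   # guard against a made-up offer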
No commits in the last 6 months.
Use this if you are developing or managing AI applications and need a systematic way to test your LLMs, RAGs, or chatbots for accuracy, consistency, and reliability across different models and evolving requirements.
Not ideal if you are looking for a tool to train or fine-tune the core AI models themselves, rather than testing the performance of your application's prompts and outputs.
Stars: 91
Forks: 11
Language: Python
License: MIT
Category: RAG
Last pushed: Dec 11, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Addepto/contextcheck"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
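The same endpoint can be queried from Python using only the standard library. The response schema is not documented on this page, so this sketch simply pretty-prints whatever JSON the API returns.

# Minimal sketch: fetch this listing's data from the public API quoted above.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/Addepto/contextcheck"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))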
Higher-rated alternatives
modelscope/evalscope
A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation...
izam-mohammed/ragrank
🎯 Your free LLM evaluation toolkit helps you assess the accuracy of facts, how well it...
Kareem-Rashed/rubric-eval
Independent framework to test, benchmark, and evaluate LLMs & AI agents locally.
justplus/llm-eval
An evaluation platform for large language models, supporting multiple evaluation benchmarks, custom datasets, and performance testing. Supports RAG evaluation based on custom datasets.
relari-ai/continuous-eval
Data-Driven Evaluation for LLM-Powered Applications