Addepto/contextcheck

An MIT-licensed framework for testing LLMs, RAG systems, and chatbots. Configurable via YAML and integrable into CI pipelines for automated testing.

Score: 38 / 100 (Emerging)

This tool helps AI product managers, quality assurance engineers, and developers verify that their Large Language Models (LLMs), RAG systems, and AI chatbots are reliable and perform as expected. You provide test scenarios with specific prompts or queries and define expected responses. The tool then automatically generates queries, runs the tests to detect issues such as incorrect answers or hallucinations, and outputs clear results, enabling quick adjustments and improvements to your AI systems.
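A test scenario of the kind described above might be sketched in YAML. This is a hypothetical illustration: the field names and schema below are invented for the example, not contextcheck's actual configuration format.

```yaml
# Hypothetical test scenario (field names are illustrative, not contextcheck's real schema)
scenario: refund-policy-check
target:
  type: chatbot
  endpoint: https://example.com/chat   # your deployed assistant
tests:
  - prompt: "What is your refund window?"
    expect:
      contains: "30 days"              # flag the answer if this phrase is missing
  - prompt: "Who founded the company?"
    expect:
      not_hallucinated: true           # check the answer against source documents
```

The idea is that each test pairs a prompt with a machine-checkable expectation, so runs can pass or fail automatically.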

No commits in the last 6 months.

Use this if you are developing or managing AI applications and need a systematic way to test your LLMs, RAGs, or chatbots for accuracy, consistency, and reliability across different models and evolving requirements.

Not ideal if you are looking for a tool to train or fine-tune the core AI models themselves, rather than testing the performance of your application's prompts and outputs.
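Because configuration lives in YAML, a test run can be wired into CI. A minimal GitHub Actions sketch, assuming a hypothetical `contextcheck` CLI entry point and config path (the install source and command name are assumptions, not taken from the project's docs):

```yaml
# Hypothetical CI step; the actual install method, CLI command, and flags
# depend on contextcheck's own README.
name: llm-tests
on: [push]
jobs:
  contextcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # No published package (see badges above), so install from the repo directly
      - run: pip install git+https://github.com/Addepto/contextcheck
      - run: contextcheck run tests/scenarios.yaml   # assumed command name
```

Failing expectations would then fail the pipeline step, surfacing regressions on every push.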

AI quality assurance, chatbot testing, LLM evaluation, RAG system validation, AI product management
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 91
Forks: 11
Language: Python
License: MIT
Last pushed: Dec 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Addepto/contextcheck"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
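The same endpoint can also be queried from Python using only the standard library. The URL is taken verbatim from the curl example above; the shape of the JSON response is not documented here, so the fetch helper simply returns the raw parsed payload:

```python
import json
import urllib.request

# Base endpoint from the curl example above; any GitHub repo slug can be substituted.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score URL for a given repository slug."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the score JSON (network call; response fields undocumented here)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("Addepto", "contextcheck"))
# → https://pt-edge.onrender.com/api/v1/quality/rag/Addepto/contextcheck
```

Calling `fetch_quality("Addepto", "contextcheck")` performs the same request as the curl command, subject to the rate limits noted above.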