GustyCube/ERR-EVAL

Benchmark for evaluating AI epistemic reliability: testing how well LLMs handle uncertainty, avoid hallucinations, and acknowledge what they don't know.

Score: 32 / 100 (Emerging)

This benchmark evaluates how reliably your AI models handle uncertainty and incomplete information. It takes a model as input and outputs a score across five critical areas, measuring how well the model detects ambiguity, avoids making things up, and acknowledges what it doesn't know. AI product managers, researchers, and anyone deploying AI systems can use it to check that their models are trustworthy and safe.

Use this if you need to rigorously test whether your AI model can recognize and respond appropriately to incomplete, noisy, or inconsistent data without 'hallucinating' or being overly confident.

Not ideal if you are looking to improve your AI's performance on standard factual recall or task execution where all necessary information is explicitly provided.
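As a rough illustration of what one epistemic-reliability check involves, here is a minimal Python sketch in the spirit of the benchmark: scoring whether a model abstains on unanswerable prompts instead of fabricating an answer. The prompts, abstention markers, and model callable are hypothetical stand-ins, not ERR-EVAL's actual API.

# Hypothetical sketch of one epistemic-reliability check: does the model
# abstain on unanswerable prompts instead of fabricating an answer?
# The prompts, markers, and `model` callable are illustrative stand-ins,
# not ERR-EVAL's actual API.

ABSTENTION_MARKERS = ("i don't know", "i'm not sure",
                      "cannot determine", "not enough information")

UNANSWERABLE_PROMPTS = [
    "What was the exact population of Atlantis in 300 BC?",
    "Which of the two options is better?",  # no options were given
]

def abstains(answer: str) -> bool:
    """True if the answer acknowledges uncertainty rather than guessing."""
    lowered = answer.lower()
    return any(marker in lowered for marker in ABSTENTION_MARKERS)

def abstention_rate(model, prompts=UNANSWERABLE_PROMPTS) -> float:
    """Fraction of unanswerable prompts on which the model abstains."""
    return sum(abstains(model(p)) for p in prompts) / len(prompts)

if __name__ == "__main__":
    # Stand-in model that always guesses; a real harness would call an LLM.
    overconfident = lambda prompt: "The answer is definitely 42."
    print(f"Abstention rate: {abstention_rate(overconfident):.2f}")  # 0.00

A real harness would call an LLM and use many more prompts per area; this only shows the shape of the measurement.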

Tags: AI-evaluation, AI-safety, model-benchmarking, AI-reliability, responsible-AI
No package · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/GustyCube/ERR-EVAL"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
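If you prefer to consume the endpoint from code, here is a minimal Python sketch using only the standard library. The response schema is not documented here, so inspect the payload before relying on specific keys.

# Minimal sketch of calling the quality API from Python. The endpoint is
# assumed to return JSON; the exact field names are not documented here.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/GustyCube/ERR-EVAL"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Print the full payload first to see which keys are available.
print(json.dumps(data, indent=2))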