Cohorte-ai/trustgate

Black-box AI reliability certification via self-consistency sampling and conformal calibration

Score: 36 / 100 (Emerging)

This tool helps AI product managers, quality assurance engineers, and operations teams determine whether an AI system is reliable enough to deploy. You provide your AI model and a set of test questions, and it outputs a single, statistically guaranteed reliability level (e.g., 98.0%) that tells you how often the AI's top answer is correct. It helps you assess the production readiness of LLMs, AI agents, or RAG pipelines.

Use this if you need a formal, quantifiable guarantee of your AI system's performance before putting it into production, especially for critical applications where 'good enough' isn't acceptable.

Not ideal if you are looking for basic performance metrics like accuracy or F1 score on a labeled dataset, or if you don't need a statistical guarantee of your model's real-world reliability.
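The two techniques named in the tagline, self-consistency sampling and conformal calibration, compose naturally: sample the model several times per prompt, treat answer agreement as a confidence score, then use a labeled calibration set to turn that score into a statistical guarantee. The sketch below is illustrative only, assuming a simple split-conformal recipe; the function names (`calibrate`, `certify`) and the scoring rule are my own, not TrustGate's actual implementation.

```python
import math
from collections import Counter

def agreement_scores(samples):
    """Empirical frequency of each distinct answer among the samples."""
    n = len(samples)
    return {a: c / n for a, c in Counter(samples).items()}

def calibrate(model, calib_set, n_samples=10, alpha=0.05):
    """Split-conformal calibration (illustrative).

    calib_set: list of (prompt, true_answer) pairs.
    Nonconformity score = 1 - frequency of the true answer among
    n_samples independent model calls.  Returns a threshold qhat such
    that the test-time set {a : freq(a) >= 1 - qhat} contains the true
    answer with probability at least 1 - alpha (exchangeability assumed).
    """
    scores = []
    for prompt, truth in calib_set:
        freqs = agreement_scores([model(prompt) for _ in range(n_samples)])
        scores.append(1.0 - freqs.get(truth, 0.0))
    m = len(scores)
    k = math.ceil((m + 1) * (1 - alpha))  # conformal quantile index
    if k > m:
        return 1.0  # too few calibration points: accept every answer
    scores.sort()
    return scores[k - 1]

def certify(model, prompt, qhat, n_samples=10):
    """Prediction set of answers whose sample frequency clears the bar."""
    freqs = agreement_scores([model(prompt) for _ in range(n_samples)])
    return {a for a, f in freqs.items() if f >= 1.0 - qhat}
```

With a deterministic toy model, `calibrate` drives `qhat` to 0 and `certify` returns only the unanimous answer; with a noisy model, the set widens until the 1 - alpha coverage target is met on average.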

Tags: AI-product-management, AI-quality-assurance, model-validation, AI-system-certification, production-readiness
No Package · No Dependents
Maintenance 13 / 25
Adoption 5 / 25
Maturity 11 / 25
Community 7 / 25


Stars: 10
Forks: 1
Language: Python
License: (not listed)
Last pushed: Mar 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/Cohorte-ai/trustgate"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
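For scripted access, the same endpoint can be called from Python using only the standard library. The response schema isn't documented here, so this sketch just fetches and returns the parsed JSON payload:

```python
import json
import urllib.request

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (keyless, rate-limited)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example live call (counts against the 100 requests/day limit):
# data = fetch_quality("Cohorte-ai", "trustgate")
# print(json.dumps(data, indent=2))
```

Keyless requests share the 100/day quota, so cache responses locally if you poll more than a handful of repositories.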