Kelpejol/llm-output-stability-gate

Pre-execution reliability gate using UQLM for LLM output stability

Score: 36 / 100 · Emerging

When you're generating code with an AI model, this tool checks how stable and consistent the model's output actually is. You provide a request, it generates multiple candidate code solutions, then compares them for consistency in logic, security, and edge-case handling. Software developers and engineers can use this to get a confidence score and detailed flags showing where AI-generated code might be unreliable before they use it.

Use this if you need to quickly assess the reliability of AI-generated code for critical applications like security or production systems, catching inconsistencies that traditional linters or tests might miss.

Not ideal if you're only generating simple, non-critical code snippets where minor variations are acceptable, or if you prefer to manually review every line of AI-generated code without an automated pre-check.
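The core idea described above, sample several candidate solutions and measure how much they agree, can be sketched in plain Python. This is a hypothetical illustration of the general technique, not the repository's actual implementation; the similarity measure (difflib) and the pass threshold are assumptions chosen for the example:

```python
from difflib import SequenceMatcher
from itertools import combinations


def stability_score(candidates: list[str]) -> float:
    """Mean pairwise similarity of candidate outputs (1.0 = identical)."""
    if len(candidates) < 2:
        return 1.0
    sims = [SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(candidates, 2)]
    return sum(sims) / len(sims)


def gate(candidates: list[str], threshold: float = 0.8) -> bool:
    """Pass the gate only if the model's answers agree closely enough.

    The 0.8 threshold is an assumption for illustration, not a value
    taken from the tool itself.
    """
    return stability_score(candidates) >= threshold


# Identical answers pass the gate; divergent answers are flagged.
print(gate(["def add(a, b): return a + b"] * 3))   # True
print(gate(["def add(a, b): return a + b",
            "def add(x, y): return y - x",
            "import os; os.remove('tmp')"]))        # False
```

In practice a tool like this would compare semantic properties (logic, security behavior, edge cases) rather than raw text similarity, but the gate structure, score the agreement, then pass or flag, is the same.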

Tags: AI-code-review · software-development · application-security · code-quality · developer-productivity
No package · No dependents

Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Python
License: MIT
Category: llm-api-gateways
Last pushed: Jan 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Kelpejol/llm-output-stability-gate"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
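The endpoint above can also be called from Python with the standard library. The response schema is not documented on this page, so the field names shown in the comments (`score`, `stars`) are assumptions for illustration only:

```python
import json
from urllib.request import urlopen

# URL taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "Kelpejol/llm-output-stability-gate")


def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for a tool and parse it as JSON."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)


# Hypothetical usage -- field names are assumed, not documented:
# record = fetch_quality()
# print(record.get("score"))   # overall score, e.g. 36
# print(record.get("stars"))   # GitHub stars, e.g. 9
```

Unauthenticated calls are limited to 100 requests/day per the note above, so cache the response rather than polling.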