cvs-health/uqlm

UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection

Score: 73 / 100 (Verified)

This tool helps users of large language models (LLMs) detect when a model may be generating incorrect or fabricated information, known as "hallucinations." You provide prompts to an LLM, and the tool analyzes the responses and returns a confidence score indicating how likely each answer is to be accurate. This is useful for anyone relying on LLM outputs for critical tasks, such as content creators, researchers, and customer service managers.
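A minimal usage sketch of that generate-then-score flow, based on the BlackBoxUQ scorer shown in the project's README; the exact class names and arguments should be verified against the installed version, and the ChatOpenAI model here is an arbitrary placeholder for any LangChain-compatible chat model:

# pip install uqlm langchain-openai
import asyncio

from langchain_openai import ChatOpenAI
from uqlm import BlackBoxUQ


async def main():
    # Any LangChain chat model can serve as the generator under test.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)

    # Black-box UQ estimates hallucination risk from the consistency of
    # multiple sampled responses; no access to token logprobs is needed.
    bbuq = BlackBoxUQ(llm=llm, scorers=["semantic_negentropy"])

    results = await bbuq.generate_and_score(
        prompts=["When did humans first land on the Moon?"],
        num_responses=5,  # samples per prompt used to measure consistency
    )
    print(results.to_df())  # one row per prompt with confidence scores


asyncio.run(main())

Black-box scoring like this works with any model you can sample from repeatedly; the trade-off is the extra generation cost of the additional samples per prompt.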

1,121 stars. Actively maintained with 33 commits in the last 30 days. Available on PyPI.

Use this if you need to quickly assess the trustworthiness of responses generated by large language models and want to reduce the risk of acting on false information.

Not ideal if you primarily need to improve the underlying accuracy of your LLM rather than just detecting potential errors in its outputs.

Tags: LLM-reliability, content-verification, AI-assurance, information-quality, response-evaluation
Maintenance 20 / 25
Adoption 10 / 25
Maturity 24 / 25
Community 19 / 25

The overall score is the sum of the four category scores: 20 + 10 + 24 + 19 = 73 / 100.

Stars: 1,121
Forks: 116
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 33
Dependencies: 14

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cvs-health/uqlm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
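A quick sketch of consuming the same endpoint from Python instead of curl; the response is assumed to be a JSON body, and no particular schema is documented here, so the example simply prints the parsed payload:

import requests

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/cvs-health/uqlm"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface rate-limit or server errors early

data = resp.json()  # assumed JSON; inspect the payload for available fields
print(data)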