uqlm and query_level_uncertainty
These are competitors: both implement uncertainty quantification methods to detect hallucinations in language models, targeting the same problem space with different technical approaches, so a user would typically adopt one or the other rather than both.
About uqlm
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
This tool helps people who use large language models (LLMs) to detect when the LLM might be generating incorrect or fabricated information, known as "hallucinations." You provide text prompts to an LLM, and this tool analyzes the responses to give you a confidence score, indicating how likely the answer is to be accurate. This is useful for anyone relying on LLM outputs for critical tasks, such as content creators, researchers, or customer service managers.
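One common black-box way to produce such a confidence score is consistency scoring: resample several responses to the same prompt and measure how much they agree, since fabricated answers tend to vary across generations while well-grounded ones are stable. The sketch below is illustrative only, not uqlm's actual API; the sampled responses are hard-coded stand-ins for real LLM outputs.

```python
from collections import Counter

def consistency_confidence(responses: list[str]) -> float:
    """Toy black-box consistency score.

    Returns the fraction of sampled responses that agree with the most
    common (case/whitespace-normalized) answer. Higher = more stable,
    i.e. less likely to be a hallucination.
    """
    if not responses:
        raise ValueError("need at least one response")
    normalized = [r.strip().lower() for r in responses]
    _, count = Counter(normalized).most_common(1)[0]
    return count / len(normalized)

# Hypothetical samples drawn from one prompt at temperature > 0:
samples = ["Paris", "paris", "Paris", "Lyon", "Paris"]
score = consistency_confidence(samples)  # 0.8 -> fairly confident
```

A production scorer would also need semantic matching (e.g. embedding similarity) rather than exact string comparison, since paraphrased answers should count as agreement.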
About query_level_uncertainty
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
This project helps operations engineers and developers managing AI applications to quickly assess how confident a large language model (LLM) is about a user's query, before generating an answer. It takes a user's question or prompt as input and outputs a confidence score, allowing for faster decision-making on whether to use additional tools like a RAG system or a more complex model. This is ideal for those who need to manage the cost and latency of LLM-powered systems.
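The routing decision described above can be sketched as a simple threshold policy over the pre-generation confidence score. The function name and threshold values here are hypothetical, chosen for illustration rather than taken from the repository; in practice the thresholds would be tuned on held-out queries against accuracy and cost targets.

```python
def route_query(confidence: float,
                direct_threshold: float = 0.8,
                rag_threshold: float = 0.5) -> str:
    """Pick a handling strategy from a query-level confidence score.

    confidence: model's estimated confidence for this query, in [0, 1],
    computed before any answer is generated. Thresholds are illustrative.
    """
    if confidence >= direct_threshold:
        return "answer_directly"       # model likely knows this; skip retrieval
    if confidence >= rag_threshold:
        return "augment_with_rag"      # borderline: fetch supporting documents
    return "escalate_to_larger_model"  # low confidence: pay for a stronger model

assert route_query(0.9) == "answer_directly"
```

Because the score is computed per query rather than per response, the router runs once up front, which is what makes the cost and latency savings possible: cheap queries never touch the RAG pipeline or the larger model.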