AlexanderVNikitin/kernel-language-entropy
Code for "Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities" (NeurIPS 2024)
This tool helps AI researchers and practitioners evaluate how confident a large language model (LLM) is in its generated responses. It samples multiple responses from an LLM and computes a fine-grained uncertainty score from the semantic similarities among them. Researchers building or deploying LLMs can use this to understand and improve model reliability.
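The idea behind kernel language entropy can be sketched in a few lines: build a semantic similarity kernel over the sampled responses, normalize it to unit trace, and take its von Neumann entropy. The toy version below (NumPy, with hypothetical 0/1 similarity kernels) illustrates the principle only; it is not the repository's implementation, which derives the kernel from an NLI-style semantic similarity model.

```python
import numpy as np

def kernel_language_entropy(K):
    """Von Neumann entropy of a semantic kernel K (toy sketch).

    K[i, j] is the semantic similarity between sampled responses i and j.
    The kernel is normalized to unit trace so its eigenvalues form a
    probability distribution; the entropy of that distribution is the
    uncertainty score.
    """
    rho = K / np.trace(K)
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

# Hypothetical kernels over 4 sampled responses:
identical = np.ones((4, 4))  # all responses mean the same thing
unrelated = np.eye(4)        # all responses semantically distinct

print(kernel_language_entropy(identical))  # ~0: model is certain
print(kernel_language_entropy(unrelated))  # log(4): maximal uncertainty
```

Semantically consistent samples yield a low-rank kernel and near-zero entropy; mutually contradictory samples spread the eigenvalue mass and push the entropy toward log(n).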
No commits in the last 6 months.
Use this if you are developing or evaluating large language models and need to quantify their uncertainty in a more detailed way than traditional methods.
Not ideal if you need a simple, out-of-the-box solution, or if you lack GPU hardware or experience with Python environments.
Stars
36
Forks
6
Language
Python
License
BSD-3-Clause-Clear
Category
Last pushed
Dec 17, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlexanderVNikitin/kernel-language-entropy"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
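The same endpoint can be called from Python; a minimal sketch using only the standard library (the JSON response schema is not documented here, so the parsing step is left to the caller):

```python
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("AlexanderVNikitin", "kernel-language-entropy")
print(url)

# To fetch live data (network required), uncomment:
# import json
# data = json.load(urllib.request.urlopen(url))
```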
Compare
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models