MattYoon/reasoning-models-confidence
[NeurIPS 2025] Reasoning Models Better Express Their Confidence
This project evaluates how well large language models (LLMs) express confidence in their answers, especially when using a reasoning process. It takes outputs from different LLMs (reasoning and non-reasoning) on question-answering tasks and computes calibration metrics such as ECE (Expected Calibration Error), Brier Score, and AUROC. The primary users are researchers and practitioners who develop or deploy LLMs and need to understand their reliability.
Use this if you are developing or evaluating large language models and need to quantitatively assess how accurately they express their confidence in their generated answers.
Not ideal if you are looking for a plug-and-play solution to improve an existing LLM's confidence expression without performing in-depth analysis or running experiments.
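To make the metrics above concrete, here is a minimal sketch of how ECE and Brier Score are typically computed from per-answer confidences and correctness labels. This is an illustrative implementation of the standard definitions, not code from this repository; the function names, the equal-width binning scheme, and the sample data are assumptions.

```python
import numpy as np

def brier_score(confidences, correct):
    # Brier Score: mean squared error between stated confidence and 0/1 correctness.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean((confidences - correct) ** 2))

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: bin predictions by confidence, then take the weighted average of
    # |mean confidence - accuracy| across bins.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            mask |= confidences == 0.0  # include exact zeros in the first bin
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in the bin
    return float(ece)

# Hypothetical sample data: model-stated confidences and answer correctness.
conf = [0.9, 0.8, 0.6, 0.95]
right = [1, 1, 0, 1]
print(brier_score(conf, right))
print(expected_calibration_error(conf, right))
```

A perfectly calibrated model (e.g. 80% accuracy among answers stated at 0.8 confidence) would score an ECE of 0; AUROC, the third metric, instead measures whether confidences rank correct answers above incorrect ones.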
Stars
22
Forks
—
Language
Python
License
—
Category
NLP
Last pushed
Nov 19, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/MattYoon/reasoning-models-confidence"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
n-waves/multifit
The code to reproduce results from paper "MultiFiT: Efficient Multi-lingual Language Model...
princeton-nlp/SimCSE
[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
yxuansu/SimCTG
[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation
alibaba-edu/simple-effective-text-matching
Source code of the ACL2019 paper "Simple and Effective Text Matching with Richer Alignment Features".
Shark-NLP/OpenICL
OpenICL is an open-source framework to facilitate research, development, and prototyping of...