MattYoon/reasoning-models-confidence

[NeurIPS 2025] Reasoning Models Better Express Their Confidence

Overall score: 19 / 100 (Experimental)

This project evaluates how well large language models (LLMs) express confidence in their answers, especially when they reason before answering. It takes outputs from reasoning and non-reasoning LLMs on question-answering tasks and computes calibration metrics such as ECE, Brier score, and AUROC (sketched below). The primary users are researchers and practitioners who develop or deploy LLMs and need to understand their reliability.

Use this if you are developing or evaluating large language models and need to quantitatively assess how accurately they express their confidence in their generated answers.

Not ideal if you are looking for a plug-and-play solution to improve an existing LLM's confidence expression without performing in-depth analysis or running experiments.
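For reference, a minimal sketch of the calibration metrics named above, using NumPy and scikit-learn. The input format (one verbalized confidence and one correctness label per answer) is an assumption for illustration, not this repository's actual data schema.

import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score

def expected_calibration_error(conf, correct, n_bins=10):
    # ECE: weighted mean of |accuracy - mean confidence| over
    # equal-width confidence bins.
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi) if lo > 0 else (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Hypothetical model outputs: confidence in each answer, and whether it was correct.
conf = [0.95, 0.80, 0.60, 0.99, 0.30]
correct = [1, 1, 0, 1, 0]

print("ECE:  ", expected_calibration_error(conf, correct))
print("Brier:", brier_score_loss(correct, conf))
print("AUROC:", roc_auc_score(correct, conf))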

AI-evaluation LLM-benchmarking model-reliability natural-language-processing machine-learning-research
No License · No Package · No Dependents
Maintenance 6 / 25
Adoption 6 / 25
Maturity 7 / 25
Community 0 / 25

Stars: 22
Forks:
Language: Python
License: None
Last pushed: Nov 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/MattYoon/reasoning-models-confidence"

Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000/day.
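If you prefer Python to curl, here is a minimal sketch using only the standard library. The response fields are not documented on this page, so the JSON is printed verbatim rather than parsed into assumed fields.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "MattYoon/reasoning-models-confidence")

# Fetch the quality report and pretty-print the raw JSON payload.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))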