dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
When you have a classification model that outputs probabilities, this tool helps make those probabilities reliable. It takes the model's predicted probabilities and the actual outcomes, then refines the probabilities so they more accurately reflect the true frequencies of events. It is aimed at data scientists, machine learning engineers, and researchers who need trustworthy probability estimates in their applications.
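In practice, post-hoc calibration works roughly like this: fit a small calibration map on held-out predictions and labels, then apply it to future predictions. Below is a minimal, generic sketch of one common method, temperature scaling, implemented with numpy and scipy; the function names are illustrative and do not reflect probmetrics' actual API.

import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(probs, labels):
    # Find a temperature T minimizing negative log-likelihood when the
    # predicted probabilities are re-scaled as softmax(log(p) / T).
    # Hypothetical helper name, not probmetrics' actual API.
    log_p = np.log(np.clip(probs, 1e-12, None))
    def nll(t):
        z = log_p / t
        z -= z.max(axis=1, keepdims=True)  # stabilize the softmax
        log_q = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_q[np.arange(len(labels)), labels].mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

def apply_temperature(probs, t):
    # Re-scale probabilities with the fitted temperature and renormalize.
    z = np.log(np.clip(probs, 1e-12, None)) / t
    z -= z.max(axis=1, keepdims=True)
    q = np.exp(z)
    return q / q.sum(axis=1, keepdims=True)

Fitting on a held-out validation split (t = fit_temperature(val_probs, val_labels); calibrated = apply_temperature(test_probs, t)) avoids tuning the calibrator on the same data the model was trained on.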
Used by 1 other package. Available on PyPI.
Use this if your classification model's probability predictions do not align well with observed frequencies and you need them well-calibrated for critical decision-making; a quick diagnostic sketch follows below.
Not ideal if you are looking for new model architectures or basic classification metrics; the focus here is on refining existing model outputs and on advanced evaluation of probability quality.
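A quick way to check whether predictions align with observed frequencies is the expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence against its empirical accuracy. This is a minimal sketch using standard equal-width binning, not probmetrics' own implementation.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    # |average confidence - accuracy| per bin, weighted by bin population.
    conf = probs.max(axis=1)
    correct = probs.argmax(axis=1) == labels
    bin_ids = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        in_bin = bin_ids == b
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

A value near zero indicates well-calibrated probabilities; gaps of a few percent or more suggest post-hoc calibration is worth trying.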
Stars: 53
Forks: 5
Language: Python
License: Apache-2.0
Category: ML Frameworks
Last pushed: Mar 02, 2026
Commits (30d): 0
Dependencies: 5
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dholzmueller/probmetrics"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
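The same endpoint can be queried from Python; a minimal sketch using requests, where only the URL comes from this page and the response schema is whatever the API returns:

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/dholzmueller/probmetrics")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limits or errors
print(resp.json())       # JSON payload; exact fields depend on the API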
Related frameworks
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
yfzhang114/Generalization-Causality
On domain generalization, domain adaptation, causality, robustness, prompting, optimization, generative...
hollance/reliability-diagrams
Reliability diagrams visualize whether a classifier model needs calibration
Affirm/splinator
Splinator: probabilistic calibration with regression splines