dholzmueller/probmetrics

Post-hoc calibration methods and metrics for classification

Score: 54 / 100 (Established)

When you have a classification model that outputs probabilities, this tool helps you improve the reliability of those probabilities. It takes your model's predicted probabilities and the actual outcomes, then refines them so they more accurately reflect the true likelihood of events. This is for data scientists, machine learning engineers, and researchers who need highly trustworthy probability predictions for their applications.
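To make the idea concrete, here is a minimal numpy sketch of temperature scaling, one common post-hoc calibration method: it fits a single scalar on held-out predictions and their true labels, then rescales the probabilities. The function name and grid-search approach are illustrative only and do not reflect probmetrics' actual API.

```python
import numpy as np

def temperature_scale(probs, labels, temps=np.linspace(0.25, 4.0, 16)):
    """Fit a single temperature on held-out (probs, labels), then
    return recalibrated probabilities and the chosen temperature.
    Illustrative sketch of post-hoc calibration, not probmetrics' API."""
    # work in log space; clip to avoid log(0)
    log_p = np.log(np.clip(probs, 1e-12, 1.0))

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(t):
        # negative log-likelihood of the true labels at temperature t
        p = softmax(log_p / t)
        return -np.mean(np.log(p[np.arange(len(labels)), labels]))

    best_t = min(temps, key=nll)          # coarse grid search over t
    return softmax(log_p / best_t), best_t
```

A dedicated library would typically optimize the temperature more precisely and offer richer methods (e.g. isotonic regression), but the input/output contract is the same: probabilities plus outcomes in, refined probabilities out.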

Used by 1 other package. Available on PyPI.

Use this if your classification model's probability predictions do not align well with observed frequencies, and you need them to be well-calibrated for critical decision-making.
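"Not aligning with observed frequencies" can be quantified before deciding whether calibration is needed. Below is a generic binned expected calibration error (ECE) sketch, a standard way to measure the gap between predicted confidence and empirical accuracy; it illustrates the metric in general, not probmetrics' implementation.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Top-label binned ECE: average |confidence - accuracy| over
    equal-width confidence bins, weighted by bin size.
    Generic sketch of the metric, not probmetrics' implementation."""
    conf = probs.max(axis=1)                  # predicted confidence
    pred = probs.argmax(axis=1)               # predicted class
    correct = (pred == labels).astype(float)
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap          # weight by bin frequency
    return ece
```

A model whose 90%-confidence predictions are right only 70% of the time contributes a 0.2 gap in that bin; a well-calibrated model drives the ECE toward zero.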

Not ideal if you are looking for new model architectures or basic classification metrics: the package focuses on refining existing model outputs and on advanced evaluation of probability quality.

Tags: predictive-modeling, risk-assessment, fraud-detection, medical-diagnosis, forecasting
Maintenance 10 / 25
Adoption 9 / 25
Maturity 25 / 25
Community 10 / 25


Stars: 53
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Mar 02, 2026
Commits (30d): 0
Dependencies: 5
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dholzmueller/probmetrics"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.