WenjianHuang93/h-Calibration

h-calibration: post-hoc calibration for deep learning classifiers

Score: 28 / 100 (Experimental)

This project helps machine learning engineers and researchers improve the trustworthiness of their deep learning classification models. It takes the raw outputs (logits and labels) from a pretrained classifier and processes them into more reliable, calibrated probability scores. This lets users interpret model predictions with confidence, especially in applications where accurate probability estimates are critical.

No commits in the last 6 months.

Use this if you need your deep learning classifier's predicted probabilities to accurately reflect the true likelihood of an event, rather than just its confidence ranking.

Not ideal if you are looking for a tool to train an entire classification model from scratch, as this focuses specifically on post-hoc calibration.
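This page does not document the repository's own calibration objective, so as a generic illustration of what post-hoc calibration does with logits and labels, here is a minimal temperature-scaling sketch. Temperature scaling is a standard baseline, not the h-calibration method itself, and every name below is illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Pick the single scalar T that minimizes held-out NLL (grid search
    # keeps the sketch dependency-free; a real fit would use an optimizer).
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy overconfident classifier: large margin on the predicted class,
# but 30% of the labels are noisy, so raw confidences are too high.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(0.0, 1.0, size=(200, 3))
logits[np.arange(200), labels] += 4.0          # inflate confidence
flip = rng.random(200) < 0.3
labels_noisy = np.where(flip, rng.integers(0, 3, size=200), labels)

T = fit_temperature(logits, labels_noisy)
# An overconfident model needs T > 1 (softening) to lower its NLL.
```

Dividing logits by a fitted T > 1 spreads probability mass away from the top class, which is exactly the correction an overconfident network needs; the calibrated scores then track empirical accuracy more closely.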

Tags: deep-learning, model-calibration, machine-learning-engineering, predictive-modeling, classification
Badges: Stale (6m), No Package, No Dependents

Score breakdown:
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 4 / 25


Stars: 28
Forks: 1
Language: Python
License: MIT
Last pushed: Sep 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/WenjianHuang93/h-Calibration"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
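The same endpoint can be called from Python. The response schema is not documented on this page, so this sketch simply fetches and returns whatever JSON the service sends back; the `fetch_quality` helper name is our own.

```python
import json
import urllib.request

# Endpoint from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/WenjianHuang93/h-Calibration")

def fetch_quality(url=URL, timeout=10):
    """Fetch the quality report as a dict (schema not documented here)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality(), indent=2))
```

Note the unauthenticated limit of 100 requests/day; cache the response rather than calling the endpoint on every run.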