WenjianHuang93/h-Calibration
h-Calibration: post-hoc calibration for deep learning classifiers
This project helps machine learning engineers and researchers improve the trustworthiness of their deep learning classification models. It takes the raw outputs (logits and labels) of a pretrained classifier and processes them into more reliable, calibrated probability scores. This lets users interpret model predictions with confidence, especially in applications where accurate probability estimates are critical.
No commits in the last 6 months.
Use this if you need your deep learning classifier's predicted probabilities to accurately reflect the true likelihood of an event, rather than just its confidence ranking.
Not ideal if you are looking for a tool to train an entire classification model from scratch, as this focuses specifically on post-hoc calibration.
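To make "post-hoc calibration" concrete, here is a minimal sketch of temperature scaling, one of the simplest post-hoc methods (the approach behind the gpleiss/temperature_scaling alternative listed below, not necessarily the method this repository implements): a single scalar T is fit on held-out logits and labels to soften overconfident probabilities. The function names and the grid-search fitting are illustrative choices, not this project's API.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    # Pick the temperature that minimizes NLL on held-out data.
    # T > 1 softens overconfident predictions; T < 1 sharpens them.
    return min(grid, key=lambda T: nll(logits, labels, T))
```

On a model that is confidently wrong half the time, the fitted temperature comes out above 1, flattening the predicted probabilities toward the true accuracy.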
Stars: 28
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Sep 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/WenjianHuang93/h-Calibration"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
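The curl command above suggests the endpoint follows a per-repository URL pattern. A small sketch of building that URL in Python, assuming the pattern generalizes to other owner/repo pairs (the helper name is hypothetical, and the JSON response schema is not documented here, so the fetch is left as a comment):

```python
# Hypothetical helper: reconstructs the API URL shown in the curl example.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    # Assumed pattern: BASE/<owner>/<repo>
    return f"{BASE}/{owner}/{repo}"

# To actually fetch (response fields are not specified on this page):
#   import urllib.request, json
#   data = json.load(urllib.request.urlopen(quality_url("WenjianHuang93", "h-Calibration")))
```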
Higher-rated alternatives
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
yfzhang114/Generalization-Causality
On domain generalization, domain adaptation, causality, robustness, prompting, optimization, generative...
hollance/reliability-diagrams
Reliability diagrams visualize whether a classifier model needs calibration