hollance/reliability-diagrams
Reliability diagrams visualize whether a classifier model needs calibration
This tool helps machine learning engineers and data scientists assess whether their classification models honestly report their certainty. It takes your model's predictions (true labels, predicted labels, and confidence scores) and visualizes how well the reported confidence aligns with the actual accuracy. The output is a "reliability diagram" that shows at a glance whether your model is overconfident, underconfident, or well-calibrated.
167 stars. No commits in the last 6 months.
Use this if you are building or deploying a classification model and need to ensure its predicted confidence scores are trustworthy and accurately reflect its performance.
Not ideal if you need a tool to perform model calibration or improve accuracy, as this project focuses solely on diagnostic visualization.
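The core idea behind a reliability diagram is simple: group predictions into confidence bins, then compare each bin's average confidence against its actual accuracy. Below is a minimal sketch of that binning step in plain NumPy; the function name `reliability_bins` and its signature are illustrative, not the repository's actual API.

```python
import numpy as np

def reliability_bins(true_labels, pred_labels, confidences, num_bins=10):
    """Bin predictions by confidence and compare per-bin accuracy to
    mean confidence. A well-calibrated model has accuracy close to
    confidence in every bin; the gap is what the diagram visualizes."""
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    # Assign each prediction to a confidence bin (indices 0..num_bins-1).
    indices = np.digitize(confidences, bins[1:-1], right=True)
    accuracies = np.zeros(num_bins)
    avg_conf = np.zeros(num_bins)
    counts = np.zeros(num_bins)
    for b in range(num_bins):
        mask = indices == b
        counts[b] = mask.sum()
        if counts[b] > 0:
            accuracies[b] = (true_labels[mask] == pred_labels[mask]).mean()
            avg_conf[b] = confidences[mask].mean()
    # Expected Calibration Error: count-weighted gap between
    # per-bin accuracy and per-bin mean confidence.
    ece = np.sum(counts / counts.sum() * np.abs(accuracies - avg_conf))
    return accuracies, avg_conf, counts, ece
```

Plotting `accuracies` against `avg_conf` per bin gives the diagram: bars on the diagonal mean good calibration, bars below it mean overconfidence.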
Stars: 167
Forks: 19
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Feb 11, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hollance/reliability-diagrams"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
yfzhang114/Generalization-Causality
Papers on domain generalization, domain adaptation, causality, robustness, prompting, optimization, generative...
Affirm/splinator
Splinator: probabilistic calibration with regression splines