hollance/reliability-diagrams

Reliability diagrams visualize whether a classifier model needs calibration

Score: 41 / 100 (Emerging)

This tool helps machine learning engineers and data scientists assess whether their classification models' confidence scores accurately reflect their certainty. It takes your model's predictions (true labels, predicted labels, and confidence scores) and visualizes how well predicted confidence aligns with actual accuracy. The output is a "reliability diagram" that shows at a glance whether your model is overconfident, underconfident, or well-calibrated.
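The underlying computation is straightforward to sketch. The following is a minimal, framework-free illustration of the binning step behind a reliability diagram (not this repository's actual code): predictions are grouped into equal-width confidence bins, and each bin's average confidence is compared with its empirical accuracy.

```python
def reliability_bins(true_labels, pred_labels, confidences, num_bins=10):
    """Group predictions into equal-width confidence bins and report,
    per bin, the mean confidence and the empirical accuracy."""
    bins = [{"count": 0, "conf_sum": 0.0, "correct": 0} for _ in range(num_bins)]
    for y, y_hat, conf in zip(true_labels, pred_labels, confidences):
        # Map confidence in [0, 1] to a bin index; clamp conf == 1.0 into the last bin.
        idx = min(int(conf * num_bins), num_bins - 1)
        bins[idx]["count"] += 1
        bins[idx]["conf_sum"] += conf
        bins[idx]["correct"] += int(y == y_hat)
    results = []
    for b in bins:
        if b["count"] == 0:
            results.append(None)  # empty bin: nothing to plot
        else:
            results.append({
                "avg_confidence": b["conf_sum"] / b["count"],
                "accuracy": b["correct"] / b["count"],
            })
    return results
```

Plotting per-bin accuracy against average confidence gives the diagram: bars below the diagonal (accuracy lower than confidence) indicate overconfidence, bars above it indicate underconfidence.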

167 stars. No commits in the last 6 months.

Use this if you are building or deploying a classification model and need to ensure its predicted confidence scores are trustworthy and accurately reflect its performance.

Not ideal if you need a tool to perform model calibration or improve accuracy, as this project focuses solely on diagnostic visualization.
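A single number often reported alongside such diagrams is the Expected Calibration Error (ECE): the bin-count-weighted average gap between confidence and accuracy. A minimal sketch of that summary statistic (illustrative only, not this repository's API):

```python
def expected_calibration_error(true_labels, pred_labels, confidences, num_bins=10):
    """ECE: weighted average of |accuracy - mean confidence| over confidence bins."""
    n = len(confidences)
    counts = [0] * num_bins
    conf_sums = [0.0] * num_bins
    corrects = [0] * num_bins
    for y, y_hat, conf in zip(true_labels, pred_labels, confidences):
        idx = min(int(conf * num_bins), num_bins - 1)
        counts[idx] += 1
        conf_sums[idx] += conf
        corrects[idx] += int(y == y_hat)
    ece = 0.0
    for count, conf_sum, correct in zip(counts, conf_sums, corrects):
        if count:
            # Weight each bin's |accuracy - confidence| gap by its share of samples.
            ece += (count / n) * abs(correct / count - conf_sum / count)
    return ece
```

An ECE near zero means confidence tracks accuracy closely; a large value signals the miscalibration that the diagram would make visible, which you would then fix with a separate calibration method such as temperature scaling.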

machine-learning-operations model-evaluation classification-models predictive-analytics data-science
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25

Stars: 167
Forks: 19
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hollance/reliability-diagrams"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.