lorenzofamiglini/CalFram

Calibration Framework for Machine Learning and Deep Learning

Quality score: 37 / 100 (Emerging)

This framework helps you assess how trustworthy your machine learning classification models are. You provide your model's predictions, the actual outcomes, and the predicted probabilities for each class; in return, you get a detailed assessment of your model's calibration, showing where it is overconfident or underconfident. Data scientists, machine learning engineers, and researchers can use it to build more reliable models.
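CalFram's own API isn't documented on this card, so as a rough illustration of the kind of assessment described above, here is a minimal NumPy sketch of expected calibration error (ECE), one standard calibration metric. The function name and binning choices are this example's own, not CalFram's interface.

import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    # Bin samples by top-class confidence, then compare each bin's
    # average confidence to its empirical accuracy; the gap is the
    # over- or underconfidence the description above refers to.
    confidences = y_prob.max(axis=1)
    predictions = y_prob.argmax(axis=1)
    accuracies = (predictions == y_true).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - accuracies[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy 3-class example: true labels plus per-class predicted probabilities.
y_true = np.array([0, 1, 2, 1])
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.5, 0.3]])
print(f"ECE: {expected_calibration_error(y_true, y_prob):.3f}")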

No commits in the last 6 months.

Use this if you need to go beyond simple accuracy metrics and deeply understand whether your classification model's predicted probabilities truly reflect the likelihood of an event.

Not ideal if you are working with regression models or don't need detailed insights into model confidence for your classification tasks.

machine-learning-assessment model-reliability classification-performance data-science AI-ethics
Stale (6m) · No package · No dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 16
Forks: 3
Language: Python
License: MIT
Last pushed: Jul 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lorenzofamiglini/CalFram"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
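The same endpoint can be queried from a script; a minimal Python sketch using requests, assuming the response is JSON (the payload schema isn't shown here, so the example just prints it for inspection):

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/lorenzofamiglini/CalFram")
resp = requests.get(url, timeout=10)  # no API key needed up to 100 requests/day
resp.raise_for_status()
print(resp.json())  # inspect the payload rather than assuming field names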