mdca-loss/MDCA-Calibration
[CVPR 2022] Official code for the paper: "A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration"
This project helps improve the trustworthiness of AI decisions in critical applications like medical diagnosis or autonomous driving. It adds an auxiliary 'MDCA' loss to your existing deep learning training setup so that the trained model produces more reliable confidence scores alongside its predictions. It is aimed at AI developers and researchers building models for safety-critical systems.
No commits in the last 6 months.
Use this if you need your deep neural networks to not only make accurate predictions but also to accurately reflect their certainty, reducing overconfident mistakes.
Not ideal if you only care about prediction accuracy and not the reliability of your model's confidence scores, or if you prefer a 'post-hoc' calibration approach after your model is fully trained.
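To make the idea concrete, here is a minimal NumPy sketch of the MDCA regularizer as described in the paper: for each class, it compares the batch-averaged predicted probability with the empirical frequency of that class, then averages the absolute differences. This is an illustrative simplification (the official repo implements it in PyTorch and combines it with a standard classification loss); the function name and weighting below are assumptions for the example.

```python
import numpy as np

def mdca_loss(probs, labels):
    """Sketch of the MDCA (Multi-class Difference of Confidence and
    Accuracy) loss: per-class gap between mean predicted probability
    and empirical class frequency, averaged over classes.

    probs  -- (N, K) array of softmax probabilities
    labels -- (N,) array of integer class labels in [0, K)
    """
    n, k = probs.shape
    avg_conf = probs.mean(axis=0)                   # mean predicted prob per class
    avg_freq = np.bincount(labels, minlength=k) / n # empirical class frequency
    return np.abs(avg_conf - avg_freq).mean()

# A perfectly calibrated batch (one-hot predictions matching labels)
# incurs zero MDCA penalty:
probs = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
print(mdca_loss(probs, labels))  # → 0.0
```

In training, this term would typically be added to the usual cross-entropy loss with a weighting coefficient (e.g. `total_loss = ce_loss + beta * mdca_loss(...)`, where `beta` is a hyperparameter), so calibration is encouraged without sacrificing accuracy.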
Stars
33
Forks
5
Language
Python
License
MIT
Category
ML Frameworks
Last pushed
Nov 09, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mdca-loss/MDCA-Calibration"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
yfzhang114/Generalization-Causality
About domain generalization, domain adaptation, causality, robustness, prompt, optimization, generative...
Affirm/splinator
Splinator: probabilistic calibration with regression splines