Affirm/splinator
Splinator: probabilistic calibration with regression splines
When you have a predictive model that outputs probabilities, this tool helps you check whether those probabilities are trustworthy and then makes them more accurate. You input your model's probability predictions and the actual outcomes, and it outputs improved, more reliable probabilities. It is aimed at anyone who works with probability predictions, such as risk analysts, forecasters, or decision scientists, and needs a model's confidence scores to be well aligned with reality.
Use this if your models make probability predictions (like "there's an 80% chance of rain") and you need those probabilities to be realistic rather than over- or under-confident.
Not ideal if you are primarily interested in the raw classification accuracy of your model rather than the quality of its probability estimates.
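To illustrate the idea behind spline-based recalibration (this is a minimal sketch, not splinator's actual API): build a piecewise-linear spline basis over the raw scores and fit a logistic regression on that basis, so the calibration map can bend where the data demands it. The data, knot placement, and helper names below are illustrative assumptions; a production calibrator would add knot selection, regularization, and typically monotonicity constraints.

import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_spline_basis(scores, knots):
    # Piecewise-linear hinge basis: [s, max(0, s - k1), ..., max(0, s - kK)]
    return np.column_stack([scores] + [np.maximum(0.0, scores - k) for k in knots])

rng = np.random.default_rng(0)
raw_probs = rng.uniform(0, 1, size=5000)    # stand-in for a model's predicted probabilities
outcomes = rng.binomial(1, raw_probs ** 2)  # deliberately miscalibrated ground truth

knots = np.quantile(raw_probs, np.linspace(0.1, 0.9, 9))  # interior knots at score quantiles
calibrator = LogisticRegression(C=1e6)      # near-unpenalized logistic fit on the spline basis
calibrator.fit(linear_spline_basis(raw_probs, knots), outcomes)

calibrated = calibrator.predict_proba(linear_spline_basis(raw_probs, knots))[:, 1]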
Stars: 24
Forks: 4
Language: Python
License: BSD-3-Clause
Category: ml-frameworks
Last pushed: Jan 01, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Affirm/splinator"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
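The same endpoint can be called from Python; here is a minimal sketch using the requests library. The response schema is not documented on this page, so the snippet just prints whatever JSON comes back.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Affirm/splinator"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise on HTTP errors (e.g. hitting the rate limit)
print(resp.json())       # inspect the payload; fields are not specified here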
Higher-rated alternatives
facebookincubator/MCGrad
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model...
dholzmueller/probmetrics
Post-hoc calibration methods and metrics for classification
gpleiss/temperature_scaling
A simple way to calibrate your neural network.
yfzhang114/Generalization-Causality
On domain generalization, domain adaptation, causality, robustness, prompts, optimization, generative...
hollance/reliability-diagrams
Reliability diagrams visualize whether a classifier model needs calibration