xRiskLab/pearsonify

A lightweight Python package for generating classification intervals in binary classification tasks using Pearson residuals and conformal prediction.

Score: 48 / 100 (Emerging)

When you have a system that predicts a 'yes' or 'no' outcome (like customer churn or disease presence), this tool helps you understand how confident those predictions are. You input your existing prediction model and its probability scores, and it outputs a range of probabilities for each prediction. Data scientists, machine learning engineers, and analysts who build and evaluate binary classification models would find this useful.

Available on PyPI.

Use this if you need to add statistically sound, intuitive confidence intervals to your binary classification model's predictions without making strong assumptions about your data.

Not ideal if you are looking for a tool for multi-class classification or if you do not have a pre-trained model that outputs probability estimates.
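To make the idea concrete, here is a minimal sketch of how Pearson residuals can be combined with split conformal prediction to produce probability intervals. This is an illustration of the underlying technique, not the pearsonify API; the function names, the calibration data, and the finite-sample quantile correction are all assumptions for the example.

```python
import numpy as np

def pearson_residuals(y, p, eps=1e-8):
    """Pearson residual (y - p) / sqrt(p * (1 - p)) for binary outcomes."""
    p = np.clip(p, eps, 1 - eps)  # guard against division by zero
    return (y - p) / np.sqrt(p * (1 - p))

def conformal_intervals(y_calib, p_calib, p_test, alpha=0.1):
    """Return (lower, upper) probability bounds for each test prediction.

    y_calib: held-out calibration labels (0/1).
    p_calib: the model's predicted P(y=1) on the calibration set.
    p_test:  the model's predicted P(y=1) on new examples.
    alpha:   miscoverage level (0.1 -> ~90% intervals).
    """
    scores = np.abs(pearson_residuals(y_calib, p_calib))
    n = len(scores)
    # Finite-sample conformal quantile of the calibration scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level)
    # Rescale the residual quantile back to the probability scale.
    half_width = q * np.sqrt(p_test * (1 - p_test))
    lower = np.clip(p_test - half_width, 0.0, 1.0)
    upper = np.clip(p_test + half_width, 0.0, 1.0)
    return lower, upper

# Toy calibration data (hypothetical); any model emitting P(y=1) works.
y_calib = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1])
p_calib = np.array([0.2, 0.7, 0.9, 0.3, 0.6, 0.1, 0.8, 0.75, 0.4, 0.55])
p_test = np.array([0.5, 0.9])
lower, upper = conformal_intervals(y_calib, p_calib, p_test)
```

Note that the interval width scales with sqrt(p * (1 - p)), so it is widest near p = 0.5 and narrows toward 0 and 1, which matches the intuition that near-certain predictions need less slack.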

Tags: predictive analytics, model evaluation, risk assessment, statistical modeling, machine learning engineering
Maintenance 10 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 7 / 25


Stars: 23
Forks: 2
Language: Python
License: MIT
Last pushed: Feb 20, 2026
Commits (30d): 0
Dependencies: 3

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xRiskLab/pearsonify"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.