jacobgil/confidenceinterval

The long missing library for python confidence intervals

Score: 48 / 100 (Emerging)

This tool helps data scientists and machine learning engineers evaluate the reliability of their model performance metrics. You input your model's predictions and the true outcomes, and it calculates key metrics like F1-score, precision, recall, and ROC AUC, providing a lower and upper bound (confidence interval) for each. This helps you understand how much your metric might vary if you had different data, rather than just getting a single, potentially misleading, number.
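
In practice that means reporting each metric as a point estimate plus an interval. The sketch below shows the general idea with a plain percentile bootstrap; it is illustrative only, not this library's API, and the helper function name is made up for the example.

import numpy as np
from sklearn.metrics import f1_score

def bootstrap_f1_ci(y_true, y_pred, n_resamples=2000, confidence=0.95, seed=0):
    """Percentile-bootstrap confidence interval for the F1-score (illustrative)."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(y_true)
    scores = []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)  # resample (label, prediction) pairs with replacement
        scores.append(f1_score(y_true[idx], y_pred[idx]))
    lower = np.percentile(scores, (1 - confidence) / 2 * 100)
    upper = np.percentile(scores, (1 + confidence) / 2 * 100)
    return f1_score(y_true, y_pred), (lower, upper)

# A single F1 number becomes an estimate with an uncertainty range.
y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
score, (lo, hi) = bootstrap_f1_ci(y_true, y_pred)
print(f"F1 = {score:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")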

146 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to confidently assess the performance of your machine learning models and understand how much your evaluation metrics could vary with sample size or the particular data they were computed on.

Not ideal if you only need raw metric scores without any measure of their uncertainty, or if you are working with very small datasets where confidence interval methods can be unreliable.

Machine Learning Evaluation · Model Validation · Statistical Analysis · Data Science · Performance Measurement
Stale: 6 months
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 13 / 25

Stars: 146
Forks: 16
Language: Python
License: MIT
Last pushed: May 24, 2024
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jacobgil/confidenceinterval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
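
A minimal sketch of calling the endpoint from Python with the requests package; it simply prints the raw JSON rather than assuming any particular field names in the response.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jacobgil/confidenceinterval"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors, including rate-limit responses
print(resp.json())       # the full quality report as returned by the API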