jacobgil/confidenceinterval
The long-missing library for Python confidence intervals
This tool helps data scientists and machine learning engineers evaluate the reliability of their model performance metrics. You provide your model's predictions and the true outcomes, and it computes key metrics such as F1-score, precision, recall, and ROC AUC, along with a lower and upper bound (a confidence interval) for each. This shows how much a metric might vary under different samples of data, rather than reporting a single, potentially misleading, point estimate.
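To make the idea concrete, here is a minimal, library-agnostic sketch of one common way to get such bounds: bootstrap resampling of (prediction, truth) pairs and taking percentiles of the resampled F1 scores. This illustrates the concept only; it is not the confidenceinterval package's own API, and the function names here are hypothetical.

```python
import random

def f1(y_true, y_pred):
    """Plain F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def bootstrap_ci(y_true, y_pred, metric=f1, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a metric.

    Resamples index pairs with replacement, recomputes the metric each
    time, and returns (point_estimate, (lower_bound, upper_bound)).
    """
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(metric([y_true[i] for i in idx],
                             [y_pred[i] for i in idx]))
    scores.sort()
    lo = scores[int((alpha / 2) * n_boot)]
    hi = scores[int((1 - alpha / 2) * n_boot) - 1]
    return metric(y_true, y_pred), (lo, hi)

# Toy data: note how wide the interval is with only 10 samples.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
point, (low, high) = bootstrap_ci(y_true, y_pred)
print(f"F1 = {point:.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
```

The library itself implements analytic (closed-form) methods where they exist in addition to bootstrapping, which is one reason to prefer it over a hand-rolled loop like this.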
146 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to confidently assess the performance of your machine learning models and understand the variability of your evaluation metrics due to sample size or data sensitivity.
Not ideal if you only need raw metric scores without quantifying their uncertainty, or if you are working with very small datasets, where confidence-interval methods can themselves be unreliable.
Stars: 146
Forks: 16
Language: Python
License: MIT
Last pushed: May 24, 2024
Commits (30d): 0
Dependencies: 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jacobgil/confidenceinterval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
EmuKit/emukit: A Python-based toolbox of various methods in decision making, uncertainty quantification and...
google/uncertainty-baselines: High-quality implementations of standard and SOTA methods on a variety of tasks.
nielstron/quantulum3: Library for unit extraction - fork of quantulum for python3
IBM/UQ360: Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you...
aamini/evidential-deep-learning: Learn fast, scalable, and calibrated measures of uncertainty using neural networks!