uncertainty-baselines and awesome-uncertainty-deeplearning
About uncertainty-baselines
google/uncertainty-baselines
High-quality implementations of standard and state-of-the-art (SOTA) methods on a variety of tasks.
This project offers standardized, high-quality implementations of methods for assessing and improving the reliability of machine learning models. Given training data and a model configuration, it produces performance metrics such as accuracy, calibration error, and negative log-likelihood. It is designed for machine learning researchers and practitioners who need to evaluate model robustness and uncertainty in a consistent way.
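To make the metrics concrete, here is a minimal, self-contained sketch of how accuracy, negative log-likelihood, and expected calibration error can be computed from a model's predicted class probabilities. This is an illustration of the metrics themselves, not the uncertainty-baselines API; the `evaluate` function and its signature are hypothetical.

```python
import math

def evaluate(probs, labels, n_bins=10):
    """Illustrative (not the library's API): compute accuracy,
    negative log-likelihood (NLL), and expected calibration error (ECE).

    probs  -- list of per-example class-probability vectors
    labels -- list of integer class labels
    """
    preds = [max(range(len(p)), key=p.__getitem__) for p in probs]
    confs = [max(p) for p in probs]
    correct = [float(pr == y) for pr, y in zip(preds, labels)]
    n = len(labels)

    accuracy = sum(correct) / n
    # NLL: mean negative log-probability assigned to the true class.
    nll = -sum(math.log(p[y] + 1e-12) for p, y in zip(probs, labels)) / n

    # ECE: bin predictions by confidence, then average the
    # |accuracy - confidence| gap weighted by bin size.
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confs) if lo < c <= hi]
        if idx:
            bin_acc = sum(correct[i] for i in idx) / len(idx)
            bin_conf = sum(confs[i] for i in idx) / len(idx)
            ece += len(idx) / n * abs(bin_acc - bin_conf)
    return accuracy, nll, ece
```

A well-calibrated model's confidence matches its accuracy within each bin, so its ECE is near zero even when its accuracy is imperfect.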
About awesome-uncertainty-deeplearning
ENSTA-U2IS-AI/awesome-uncertainty-deeplearning
This repository contains a collection of surveys, datasets, papers, and code for predictive uncertainty estimation in deep learning models.
This is a curated collection of resources that helps machine learning practitioners understand and implement methods for estimating how certain their deep learning models are about their predictions. It provides a comprehensive list of papers, code examples, datasets, and surveys on uncertainty quantification techniques. Data scientists and AI researchers can use it to survey the field of uncertainty in deep learning, choose appropriate techniques, and apply them to their own models.
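One of the most widely covered techniques in such collections is the deep ensemble: train several models independently, average their predicted distributions, and use the entropy of the mean as an uncertainty score. A minimal sketch, assuming each member already outputs a probability vector over the same classes (the `ensemble_uncertainty` helper is hypothetical):

```python
import math

def ensemble_uncertainty(member_probs):
    """Sketch of deep-ensemble predictive uncertainty: average the
    members' probability vectors and score uncertainty as the entropy
    of the mean distribution.

    member_probs -- one probability vector per ensemble member,
                    all over the same set of classes
    """
    k = len(member_probs)
    n_classes = len(member_probs[0])
    # Mean predictive distribution across ensemble members.
    mean = [sum(p[c] for p in member_probs) / k for c in range(n_classes)]
    # Predictive entropy: high when members disagree or are
    # individually unsure; zero when all mass sits on one class.
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return mean, entropy
```

When two members fully agree on one class the entropy is 0; when they each put all their mass on different classes of a two-class problem, the mean is uniform and the entropy reaches its maximum, log 2.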