uncertainty-baselines vs. equine

                    uncertainty-baselines    equine
Overall score       70 (Verified)            46 (Emerging)
Maintenance         13/25                    10/25
Adoption            10/25                    6/25
Maturity            25/25                    25/25
Community           22/25                    5/25
Stars               1,568                    15
Forks               216                      1
Downloads           –                        –
Commits (30d)       1                        0
Language            Python                   Python
License             Apache-2.0               MIT
Risk flags          None                     None

About uncertainty-baselines

google/uncertainty-baselines

High-quality implementations of standard and SOTA methods on a variety of tasks.

This project offers standardized, high-quality implementations of methods for assessing and improving the reliability of machine learning models. Given raw training data and a model configuration, it outputs performance metrics such as accuracy, calibration error, and negative log-likelihood. It is aimed at machine learning researchers and practitioners who need to evaluate model robustness and uncertainty in a consistent way.

machine-learning-research model-robustness uncertainty-quantification predictive-modeling model-evaluation

About equine

mit-ll-responsible-ai/equine

Establishing Quantified Uncertainty in Neural Networks

When working with machine learning models that categorize or label data, it's crucial to understand not just what the model predicts, but also how confident it is and whether an input even falls within the distribution it was trained on. This tool wraps your existing deep neural network to produce enhanced predictions, including calibrated probabilities for each label and a score indicating whether an input is genuinely similar to the data the model learned from. Data scientists and machine learning engineers who need to build more trustworthy and transparent AI systems will find it valuable.

machine-learning-engineering model-trustworthiness AI-explainability predictive-analytics risk-assessment

Scores updated daily from GitHub, PyPI, and npm data.