google/uncertainty-baselines

High-quality implementations of standard and SOTA methods on a variety of tasks.

Quality score: 70 / 100 (Verified)

This project offers standardized, high-quality implementations of methods for assessing and improving the reliability of machine learning models. Given training data and a model configuration, it produces performance metrics such as accuracy, calibration error, and negative log-likelihood. It is aimed at machine learning researchers and practitioners who need to evaluate model robustness and uncertainty in a consistent way.
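The metrics it reports are standard. As a reference point, here is a minimal NumPy sketch of expected calibration error and negative log-likelihood; the function names and the equal-width binning scheme are illustrative choices, not the library's own API:

import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    # probs: (N, C) predicted class probabilities; labels: (N,) true classes.
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Weight each bin's |accuracy - confidence| gap by its sample share.
            ece += in_bin.mean() * abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
    return ece

def negative_log_likelihood(probs, labels, eps=1e-12):
    # Mean NLL of the true label under each predicted distribution.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))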

1,568 stars. Maintained, with 1 commit in the last 30 days. Available on PyPI.
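Assuming the PyPI distribution follows the repository name (worth verifying on pypi.org), installation would be:

pip install uncertainty-baselines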

Use this if you are a machine learning researcher or practitioner who needs a solid starting point to experiment with and compare different methods for quantifying model uncertainty and improving robustness.

Not ideal if you are looking for a plug-and-play solution to deploy immediately; working with the baselines requires hands-on involvement with model architectures and training specifics.

machine-learning-research model-robustness uncertainty-quantification predictive-modeling model-evaluation
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 22 / 25

Stars: 1,568
Forks: 216
Language: Python
License: Apache-2.0
Last pushed: Feb 02, 2026
Commits (30d): 1
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google/uncertainty-baselines"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
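For scripted access, the same endpoint can be called from Python. The sketch below assumes only the URL shown above; the response schema is not documented here, so it prints the raw JSON rather than picking out specific fields:

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/google/uncertainty-baselines")

# No API key needed for up to 100 requests/day.
response = requests.get(URL, timeout=10)
response.raise_for_status()
print(response.json())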