Novartis/UNIQUE
A Python library for benchmarking uncertainty estimation and quantification methods for machine learning model predictions.
When you're developing machine learning models, especially in critical fields like drug discovery, it's not enough for a model to make a prediction; you need to understand how certain it is about that prediction. This tool helps you test and compare different ways of estimating uncertainty in your models. It takes your model's inputs and predictions, applies various uncertainty quantification methods, and then gives you evaluations and visualizations to show you which methods work best. This is for machine learning practitioners, data scientists, and researchers who need to ensure their models' predictions are reliable and trustworthy.
Use this if you need to rigorously evaluate and compare how well your machine learning models convey their uncertainty, ensuring you can trust their predictions in real-world applications.
Not ideal if you are looking for a tool to build or train machine learning models, as it focuses solely on evaluating uncertainty after a model has made its predictions.
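The core idea described above — take a model's predictions, attach candidate uncertainty estimates, and score which estimate best reflects the true error — can be sketched in plain Python. Note this is a generic, self-contained illustration of that benchmarking workflow, not UNIQUE's actual API; the data, method names, and scoring metric are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: regression targets plus noisy model predictions.
# (Illustrative only -- not produced by or passed to UNIQUE itself.)
n = 500
y_true = rng.normal(size=n)
y_pred = y_true + rng.normal(scale=0.5, size=n)
abs_error = np.abs(y_pred - y_true)

# Two candidate per-sample uncertainty estimates to compare:
#   "informed" tracks the actual error magnitude (a useful estimator),
#   "random"   is unrelated to the error (a useless one).
uq_methods = {
    "informed": abs_error + rng.normal(scale=0.1, size=n),
    "random": rng.uniform(size=n),
}

def rank_correlation(a, b):
    """Spearman-style rank correlation between uncertainty and |error|.

    A good uncertainty estimate should rank high-error predictions
    as high-uncertainty, so a higher value means a better estimator.
    """
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Score every candidate method and pick the best-ranked one.
scores = {name: rank_correlation(u, abs_error) for name, u in uq_methods.items()}
best = max(scores, key=scores.get)
print(best)
```

Running this prints `informed`, since its estimates track the real errors. A library like UNIQUE automates this comparison across many established uncertainty quantification methods and evaluation metrics, with visualizations on top.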
Stars: 42
Forks: 7
Language: Python
License: BSD-3-Clause
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Novartis/UNIQUE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
EmuKit/emukit: A Python-based toolbox of various methods in decision making, uncertainty quantification and...
google/uncertainty-baselines: High-quality implementations of standard and SOTA methods on a variety of tasks.
nielstron/quantulum3: Library for unit extraction; fork of quantulum for Python 3.
IBM/UQ360: Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you...
aamini/evidential-deep-learning: Learn fast, scalable, and calibrated measures of uncertainty using neural networks!