Novartis/UNIQUE

A Python library for benchmarking uncertainty estimation and quantification methods for machine learning model predictions.

Score: 48 / 100 (Emerging)

When you're developing machine learning models, especially in critical fields like drug discovery, it's not enough for a model to make a prediction; you also need to know how certain it is about that prediction. This tool helps you test and compare different methods for quantifying the uncertainty of your models' predictions. It takes your model's inputs and predictions, applies various uncertainty quantification methods, and then produces evaluations and visualizations showing which methods perform best. It is aimed at machine learning practitioners, data scientists, and researchers who need to ensure their models' predictions are reliable and trustworthy.

Use this if you need to rigorously evaluate and compare how well your machine learning models convey their uncertainty, ensuring you can trust their predictions in real-world applications.

Not ideal if you are looking for a tool to build or train machine learning models, as it focuses solely on evaluating uncertainty after a model has made its predictions.
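As a concept illustration of the kind of evaluation such a benchmark performs (this sketch does not use the UNIQUE library's actual API; all names and the toy data are hypothetical), one common check is whether higher predicted uncertainty actually coincides with larger prediction error, measured here via Spearman rank correlation:

```python
# Concept sketch: judging an uncertainty estimate by rank-correlating
# predicted uncertainty with absolute prediction error (stdlib only).
# A good uncertainty estimator should assign high uncertainty where
# the model's error is large, giving a correlation near 1.

def ranks(values):
    """Return 1-based average ranks of values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank across the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy data: true values, model predictions, per-prediction uncertainties.
y_true = [1.0, 2.0, 3.0, 4.0, 5.0]
y_pred = [1.1, 2.5, 2.9, 3.0, 5.2]
uncert = [0.1, 0.6, 0.2, 0.9, 0.3]

errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
rho = spearman(uncert, errors)
print(f"error-uncertainty rank correlation: {rho:.2f}")
```

A high correlation (here roughly 0.97) suggests the uncertainty estimate is informative; UNIQUE-style benchmarks compute metrics like this across many uncertainty methods so they can be compared side by side.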

drug-discovery machine-learning-evaluation predictive-modeling model-reliability data-science
No package · No dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 42
Forks: 7
Language: Python
License: BSD-3-Clause
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Novartis/UNIQUE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.