TorchUQ/torchuq

A library for uncertainty quantification based on PyTorch

Score: 35 / 100 (Emerging)

When building predictive models, it's crucial that they not only make predictions but also express how confident they are in them; this helps identify when a model 'doesn't know what it doesn't know.' TorchUQ takes your existing model's predictions and the associated labels, then evaluates, visualizes, and recalibrates the uncertainty in those predictions. It's designed for data scientists, machine learning engineers, and researchers working on models in fields like the natural sciences, engineering, and trustworthy AI.
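To make the evaluate-and-recalibrate workflow concrete, here is a minimal temperature-scaling sketch in plain PyTorch. This is not torchuq's own API; the logits and labels tensors are stand-ins for your model's held-out predictions and the associated ground truth.

import torch
import torch.nn.functional as F

# Stand-ins for your model's held-out predictions and labels.
logits = torch.randn(1000, 10)
labels = torch.randint(0, 10, (1000,))

# Temperature scaling: learn one scalar T so that softmax(logits / T)
# is better calibrated on held-out data. Optimizing log(T) keeps T > 0.
log_t = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

def closure():
    optimizer.zero_grad()
    loss = F.cross_entropy(logits / log_t.exp(), labels)
    loss.backward()
    return loss

optimizer.step(closure)
calibrated_probs = F.softmax(logits / log_t.exp(), dim=1)
print("learned temperature:", log_t.exp().item())

Temperature scaling is just one recalibration technique; the library's focus is on evaluating and adjusting uncertainty estimates like these rather than on training the underlying model.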

121 stars. No commits in the last 6 months.

Use this if you need to rigorously evaluate, calibrate, or visualize the uncertainty associated with your model's predictions to ensure they are trustworthy.

Not ideal if you are looking for a tool to build predictive models from scratch, as this library focuses specifically on quantifying and managing the uncertainty of existing model outputs.

predictive-modeling trustworthy-AI statistical-inference scientific-modeling risk-assessment
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 9 / 25
(These four components sum to the overall score of 35 / 100.)

Stars: 121
Forks: 7
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 10, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TorchUQ/torchuq"

Open to everyone: 100 requests/day with no API key, or get a free key for 1,000 requests/day.
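For scripted access, the same endpoint can be queried from Python. A minimal sketch, assuming the endpoint returns JSON:

import requests

# Endpoint copied from the curl example above; the response schema
# is an assumption, so inspect the payload before relying on fields.
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TorchUQ/torchuq"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())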