TorchUQ/torchuq
A library for uncertainty quantification based on PyTorch
When building predictive models, it is crucial that they not only make predictions but also express how confident they are in those predictions. This helps identify when a model "doesn't know what it doesn't know." This tool takes your existing model's predictions and the associated labels, then evaluates, visualizes, and recalibrates the uncertainty in those predictions. It is designed for data scientists, machine learning engineers, and researchers working with models in fields such as the natural sciences, engineering, and trustworthy AI.
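To illustrate what "evaluating uncertainty" means here, below is a minimal sketch of one standard calibration metric, the expected calibration error (ECE), written in plain NumPy. This is not TorchUQ's actual API (its function names and signatures differ); it only shows the kind of computation such a library performs on a model's predicted probabilities and labels:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error (ECE) over equal-width confidence bins.

    probs:  (N, C) array of predicted class probabilities
    labels: (N,)   array of true class indices
    """
    confidences = probs.max(axis=1)          # model's confidence per sample
    predictions = probs.argmax(axis=1)       # predicted class per sample
    accuracies = (predictions == labels).astype(float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |accuracy - confidence| in this bin, weighted by bin size
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```

A well-calibrated model has a small ECE: among samples predicted with, say, 90% confidence, about 90% should actually be correct. Recalibration methods adjust the predicted probabilities to shrink this gap.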
121 stars. No commits in the last 6 months.
Use this if you need to rigorously evaluate, calibrate, or visualize the uncertainty associated with your model's predictions to ensure they are trustworthy.
Not ideal if you are looking for a tool to build predictive models from scratch, as this library focuses specifically on quantifying and managing the uncertainty of existing model outputs.
Stars: 121
Forks: 7
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 10, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TorchUQ/torchuq"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
EmuKit/emukit
A Python-based toolbox of various methods in decision making, uncertainty quantification and...
google/uncertainty-baselines
High-quality implementations of standard and SOTA methods on a variety of tasks.
nielstron/quantulum3
Library for unit extraction - fork of quantulum for python3
IBM/UQ360
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you...
aamini/evidential-deep-learning
Learn fast, scalable, and calibrated measures of uncertainty using neural networks!