orientino/dum-components

Code for "Training, Architecture, and Prior for Deterministic Uncertainty Methods", presented at the ICLR 2023 Workshop on Trustworthy ML.

Score: 27 / 100 (Experimental)

This project helps machine learning engineers and researchers build more reliable AI models. It provides methods and code to create models that not only make predictions but also estimate how confident they are in those predictions. This is particularly useful when dealing with new, unexpected data, allowing the model to flag when it's operating outside its comfort zone.

No commits in the last 6 months.

Use this if you are a machine learning practitioner developing models where understanding the certainty of predictions is critical, especially for identifying unusual or out-of-distribution data.

Not ideal if you are looking for a plug-and-play solution for simple prediction tasks without a strong need for uncertainty quantification or robust out-of-distribution detection.
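To make the use case concrete: deterministic uncertainty methods attach a confidence signal to each prediction so that unusual inputs can be flagged. This repo's actual API is not shown here, but the underlying idea can be sketched with a simple entropy-based check (the threshold and function names below are illustrative, not part of the repo):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution."""
    return -sum(p * math.log(max(p, 1e-12)) for p in probs)

def is_ood(probs, threshold=0.5):
    """Flag an input as out-of-distribution when predictive entropy
    exceeds a chosen threshold (both names are illustrative)."""
    return predictive_entropy(probs) > threshold

in_dist = [0.97, 0.01, 0.01, 0.01]  # peaked distribution: low entropy, kept
novel   = [0.25, 0.25, 0.25, 0.25]  # flat distribution: high entropy, flagged
print(is_ood(in_dist), is_ood(novel))  # False True
```

Real deterministic uncertainty methods derive the confidence signal from a single forward pass through a suitably regularized network, rather than from raw softmax outputs, but the flagging logic is the same.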

Tags: machine-learning-engineering, model-reliability, uncertainty-quantification, out-of-distribution-detection, AI-safety

Status: Stale (6 months), no package published, no dependents

Score breakdown: Maintenance 0 / 25, Adoption 5 / 25, Maturity 16 / 25, Community 6 / 25


Stars: 12
Forks: 1
Language: Python
License: MIT
Last pushed: Jun 15, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/orientino/dum-components"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
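The same endpoint can be called from any HTTP client. A minimal Python sketch that assumes only the URL layout visible in the curl command above (`/quality/<collection>/<owner>/<repo>`; the response schema is not documented here):

```python
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection, owner, repo):
    """Build the quality-API URL for a repository, percent-encoding
    each path segment. The path layout is assumed from the curl example."""
    return f"{BASE}/{quote(collection, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

print(quality_url("ml-frameworks", "orientino", "dum-components"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/orientino/dum-components
```

Fetch the URL with any client (e.g. `urllib.request` or `requests`) and parse the JSON body; within the free tier no API key header is needed.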