AlaaLab/deep-learning-uncertainty

A literature survey, paper reviews, experimental setups, and a collection of implementations of baseline methods for predictive uncertainty estimation in deep learning models.

Quality score: 37 / 100 (Emerging)

This resource collects research papers and practical code examples for understanding and implementing methods that estimate predictive uncertainty in deep learning models. It helps data scientists and machine learning engineers assess how confident their model's predictions are: the input is a trained deep learning model, and the output is a quantification of prediction uncertainty.
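To make "quantification of prediction uncertainty" concrete: one widely used baseline of the kind this collection surveys is Monte Carlo dropout, where dropout is kept active at inference and the spread over repeated stochastic forward passes serves as an uncertainty estimate. A minimal NumPy sketch, using a hypothetical toy regressor in place of a real trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fixed one-hidden-layer regressor standing in for a trained model.
W1 = rng.normal(size=(1, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays ON at test time."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=200):
    """Mean and std over T stochastic passes: prediction + uncertainty."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)
print(float(mean), float(std))  # predictive mean and its uncertainty
```

A larger std over the T passes indicates lower model confidence at that input; the repository's notebooks cover this and other baselines (e.g. deep ensembles) in full.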

640 stars. No commits in the last 6 months.

Use this if you are a data scientist or machine learning engineer who needs to understand and implement techniques to measure the reliability and confidence of predictions from deep learning models.

Not ideal if you are a non-technical user looking for a ready-to-use tool without delving into the underlying statistical and machine learning concepts.

Machine Learning Deep Learning Predictive Modeling Model Evaluation Risk Assessment
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 640
Forks: 78
Language: Jupyter Notebook
License: None
Last pushed: Aug 01, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AlaaLab/deep-learning-uncertainty"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
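The same endpoint can be consumed programmatically. A minimal Python sketch using only the standard library; note that the response field names used here (`score`) are an assumption about the payload shape, not documented behavior:

```python
import json
from urllib.request import urlopen  # stdlib HTTP client

API = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/{owner}/{repo}"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repository."""
    return API.format(owner=owner, repo=repo)

def parse_quality(payload: str):
    """Extract the overall score from a JSON response body.
    The 'score' field name is an assumption about the API's schema."""
    data = json.loads(payload)
    return data.get("score")

# Live fetch (commented out to avoid a network call here):
# with urlopen(quality_url("AlaaLab", "deep-learning-uncertainty")) as r:
#     print(parse_quality(r.read().decode()))

# Hypothetical example payload, for illustration only:
sample = '{"score": 37, "tier": "Emerging"}'
print(parse_quality(sample))  # → 37
```

Unauthenticated calls are rate-limited to 100 requests/day, so batch consumers should cache responses or use a free key.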