ExplainableML/BayesCap

(ECCV 2022) BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks

Score: 33 / 100 (Emerging)

This helps scientists, engineers, and researchers working with existing computer vision models. Given an already-trained image processing or analysis model and a small dataset, it outputs calibrated uncertainty estimates for that model's predictions, so practitioners can gauge the reliability of the model's outputs without expensive retraining.

No commits in the last 6 months.

Use this if you need to understand the confidence or reliability of predictions from your deployed computer vision models (e.g., for image super-resolution, deblurring, inpainting, medical image translation, or autonomous driving depth estimation) without altering or retraining the original model.

Not ideal if you are developing new deep learning models from scratch and want to build uncertainty directly into the initial training process, rather than adding it to a frozen model.
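At its core, BayesCap trains a small add-on network over the frozen model and fits it with a heteroscedastic generalized-Gaussian negative log-likelihood, so each prediction comes with a learned scale (alpha) and shape (beta) that encode uncertainty. A minimal numeric sketch of that per-output loss (function name is illustrative, not the repository's API):

```python
import math

def gg_nll(y, y_hat, alpha, beta):
    """Negative log-likelihood of a generalized Gaussian with
    location y_hat, scale alpha > 0, and shape beta > 0:
      p(y) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|y - y_hat| / alpha) ** beta)
    """
    return ((abs(y - y_hat) / alpha) ** beta
            - math.log(beta) + math.log(2.0 * alpha) + math.lgamma(1.0 / beta))
```

With beta = 2 this reduces to a Gaussian likelihood; learning alpha and beta per output lets large predicted scales flag unreliable predictions without touching the frozen model's weights.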

computer-vision medical-imaging autonomous-driving image-processing model-reliability
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 50
Forks: 4
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Dec 14, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ExplainableML/BayesCap"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
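The same endpoint can be called from Python with only the standard library; a minimal sketch that builds the URL shown above (the response schema is not documented here, so any fields you read from it are assumptions):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Build the endpoint URL for a given project, e.g.
    # ("ml-frameworks", "ExplainableML", "BayesCap").
    return f"{API_BASE}/{category}/{owner}/{repo}"

# A live request would look like this (uncomment to run):
# with urllib.request.urlopen(
#         quality_url("ml-frameworks", "ExplainableML", "BayesCap")) as resp:
#     data = json.load(resp)
```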