cambridge-mlg/CLUE

Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates"

Quality score: 38 / 100 (Emerging)

This project helps machine learning practitioners understand why their models are uncertain about predictions. It takes in a trained Bayesian Neural Network (BNN) and a specific input, then identifies the smallest changes to that input that would make the BNN more confident. This allows data scientists or ML engineers to pinpoint which features or patterns in the data are contributing most to the model's lack of confidence.
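The core idea can be sketched as a small optimization: minimize the model's predictive uncertainty plus a distance penalty that keeps the counterfactual close to the original input. The snippet below is a minimal illustration, not the paper's implementation: `predictive_uncertainty` is a toy surrogate for a BNN's predictive entropy, and the search runs directly in input space with numerical gradients, whereas CLUE itself searches in the latent space of a generative model (a VAE) to keep counterfactuals on the data manifold.

```python
import numpy as np

def predictive_uncertainty(x):
    # Hypothetical stand-in for a BNN's predictive entropy: the toy
    # "model" is most confident at (2, 2), uncertain elsewhere.
    return float(np.sum((x - 2.0) ** 2))

def clue_style_search(x0, dist_weight=0.5, lr=0.1, steps=200):
    """Gradient descent on uncertainty(x) + lambda * ||x - x0||_1,
    a CLUE-style objective, using central finite differences."""
    x = x0.astype(float).copy()
    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            f_plus = (predictive_uncertainty(x + e)
                      + dist_weight * np.abs((x + e) - x0).sum())
            f_minus = (predictive_uncertainty(x - e)
                       + dist_weight * np.abs((x - e) - x0).sum())
            grad[i] = (f_plus - f_minus) / (2 * eps)
        x -= lr * grad
    return x

x0 = np.zeros(2)            # an input the toy model is uncertain about
x_clue = clue_style_search(x0)
delta = x_clue - x0         # per-feature changes that restore confidence
```

Inspecting `delta` feature by feature is what lets a practitioner see which inputs drive the model's uncertainty; the distance penalty (here an L1 term) is what makes the explanation sparse and minimal.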

No commits in the last 6 months.

Use this if you need to explain why your machine learning model is uncertain about a particular prediction, especially for critical applications where trust in model output is paramount.

Not ideal if you are solely interested in improving model accuracy rather than understanding its uncertainty, or if your models are not differentiable probabilistic models like Bayesian Neural Networks.

Topics: model-interpretability, machine-learning-auditing, risk-assessment, predictive-modeling, data-science-workflow
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 35
Forks: 6
Language: Python
License: MIT
Last pushed: Apr 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cambridge-mlg/CLUE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.