cambridge-mlg/CLUE
Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates"
CLUE helps machine learning practitioners understand why a model is uncertain about a prediction. Given a trained Bayesian Neural Network (BNN) and a specific input, it identifies the smallest changes to that input that would make the BNN more confident, letting data scientists and ML engineers pinpoint which features or patterns in the data contribute most to the model's lack of confidence.
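The core idea described above can be sketched as a small optimization: search for a counterfactual input near the original that trades off lower predictive uncertainty against distance from the original. The sketch below is a toy illustration only, with a made-up smooth "uncertainty" surrogate and hypothetical function names; it is not the repository's API or the paper's full method (which searches in the latent space of a generative model).

```python
import numpy as np

# Toy sketch of the CLUE-style objective: find a counterfactual x' near the
# original input x0 that lowers predictive uncertainty H(x'), by minimizing
#   H(x') + lam * ||x' - x0||_1
# Here `uncertainty` is a made-up stand-in for a BNN's predictive entropy.

def uncertainty(x):
    # Hypothetical surrogate: high "uncertainty" near the origin, low far away.
    return np.exp(-np.sum(x ** 2))

def clue_objective(x, x0, lam=0.1):
    return uncertainty(x) + lam * np.sum(np.abs(x - x0))

def numeric_grad(f, x, eps=1e-5):
    # Central finite differences, so the toy stays dependency-free.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def find_clue(x0, steps=200, lr=0.1):
    # Plain gradient descent on the objective, starting from the input itself.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * numeric_grad(lambda z: clue_objective(z, x0), x)
    return x

x0 = np.array([0.1, -0.1])   # an input the toy model is "uncertain" about
x_clue = find_clue(x0)       # a nearby point with lower "uncertainty"
```

The L1 penalty encourages sparse changes, so the counterfactual differs from the original in as few features as possible.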
No commits in the last 6 months.
Use this if you need to explain why your machine learning model is uncertain about a particular prediction, especially for critical applications where trust in model output is paramount.
Not ideal if you are solely interested in improving model accuracy rather than understanding its uncertainty, or if your models are not differentiable probabilistic models like Bayesian Neural Networks.
Stars: 35
Forks: 6
Language: Python
License: MIT
Last pushed: Apr 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cambridge-mlg/CLUE"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
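The curl call above can be reproduced from Python's standard library. This sketch only assumes the URL pattern shown on this page; the response schema is not documented here, so the parsed JSON should be treated as opaque.

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Build the endpoint URL shown above; path segments are URL-escaped.
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "cambridge-mlg", "CLUE")
print(url)

# Uncomment to actually fetch (no API key needed under the free tier):
# with urllib.request.urlopen(url, timeout=10) as resp:
#     data = json.load(resp)
```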
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...