laura-rieger/deep-explanation-penalization

Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584

Score: 39 / 100 (Emerging)

This project helps machine learning engineers and researchers improve the reliability and accuracy of their neural networks. It takes a neural network together with prior knowledge about which parts of the input data should or shouldn't influence predictions, and adds a penalty to the training loss whenever the network's explanations violate that knowledge. The result is a more robust network that aligns better with known principles and provides more trustworthy explanations for its decisions.
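Concretely, the training objective becomes the usual prediction loss plus a weighted explanation penalty, where the penalty measures how much attribution falls on input regions that prior knowledge marks as irrelevant. The sketch below illustrates this idea in PyTorch; the model, lambda value, and mask are illustrative, and input gradients stand in for the contextual decomposition scores CDEP actually uses (those require the repository's CD implementation).

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def penalized_step(x, y, irrelevant_mask, lam=1.0):
    # Enable gradients on the input so an attribution can be computed
    # for the penalty term.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred_loss = F.cross_entropy(logits, y)
    # Stand-in attribution: input gradients. CDEP itself uses
    # contextual decomposition scores at this step.
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    # Penalize attribution mass on regions marked irrelevant by prior knowledge.
    expl_loss = (grads * irrelevant_mask).pow(2).sum()
    loss = pred_loss + lam * expl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: forbid reliance on the top-left 8x8 patch of each image.
x = torch.randn(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))
mask = torch.zeros_like(x)
mask[:, :, :8, :8] = 1.0
print(penalized_step(x, y, mask))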

128 stars. No commits in the last 6 months.

Use this if you are developing neural networks and want to prevent them from learning spurious correlations in your training data, such as irrelevant patches in medical images or gendered words in text classification.

Not ideal if you are a business user looking for a no-code solution to interpret existing models, as this tool requires familiarity with model training and code implementation.

machine-learning-engineering model-robustness AI-explainability bias-mitigation medical-image-analysis
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 128
Forks: 14
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 22, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/laura-rieger/deep-explanation-penalization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
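The same data can also be fetched from a script. A minimal Python sketch using only the standard library (the response schema isn't documented here, so this simply prints whatever JSON the endpoint returns):

import json
import urllib.request

# Same endpoint as the curl command above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/laura-rieger/deep-explanation-penalization")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))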