csinva/disentangled-attribution-curves
Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"
This tool helps data scientists and machine learning engineers understand why a Random Forest or Boosted Tree model makes the predictions it does. Given a trained tree-based model and the data it was trained on, it produces disentangled attribution curves: visualizations of how individual features, or pairs of features, influence the model's output.
No commits in the last 6 months.
Use this if you need to interpret the predictions of your scikit-learn compatible Random Forest or Boosted Tree models, especially to understand how different features contribute to the outcome.
Not ideal if you are working with other model types, such as neural networks, or if you don't need detailed, feature-level interpretability.
Stars: 28
Forks: 4
Language: Python
License: MIT
Last pushed: Feb 11, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csinva/disentangled-attribution-curves"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi: Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav: Code for the TCAV ML interpretability project
MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5: A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...