csinva/disentangled-attribution-curves

Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"

Score: 35 / 100 (Emerging)

This tool helps data scientists and machine learning engineers understand why a Random Forest or Boosted Tree model makes certain predictions. You provide your trained tree-based model and the data it was trained on. It then outputs 'disentangled attribution curves,' which are clear visual explanations showing how individual features or pairs of features influence the model's output.
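The page does not show DAC's own API, so as a rough analogue of the workflow it describes (trained tree ensemble in, per-feature effect curve out), here is a minimal partial-dependence-style sketch using only stable scikit-learn and NumPy calls; the feature index, grid size, and synthetic dataset are illustrative choices, not part of DAC:

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor

# Train a small random forest on a synthetic regression task.
X, y = make_friedman1(n_samples=200, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Effect curve for feature 0: sweep the feature over a grid and
# average the model's prediction across the dataset at each value.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
curve = []
for v in grid:
    X_mod = X.copy()
    X_mod[:, 0] = v          # pin feature 0 to the grid value
    curve.append(rf.predict(X_mod).mean())
curve = np.asarray(curve)

print(curve.shape)  # one averaged prediction per grid point
```

Plotting `curve` against `grid` gives a single-feature effect curve; DAC's contribution, per the paper, is disentangling such effects for individual features and feature pairs in a way specific to tree ensembles.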

No commits in the last 6 months.

Use this if you need to interpret the predictions of your scikit-learn compatible Random Forest or Boosted Tree models, especially to understand how different features contribute to the outcome.

Not ideal if you are working with other types of machine learning models, such as neural networks, or if you don't need detailed, feature-level interpretability.

machine-learning-interpretability predictive-modeling feature-importance model-debugging data-science
Signals: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 28
Forks: 4
Language: Python
License: MIT
Last pushed: Feb 11, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/csinva/disentangled-attribution-curves"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.