artefactory/woodtapper
WoodTapper — a Python toolbox for interpretable and explainable tree ensembles.
This tool helps data scientists and machine-learning practitioners understand predictions made by complex tree-based models. Given a trained scikit-learn tree ensemble and a dataset, it outputs clear, simple rules that explain the model's logic, or surfaces similar data points that influenced a prediction. It is aimed at anyone who needs to explain why a model made a specific decision, whether to stakeholders or for regulatory compliance.
Available on PyPI.
Use this if you need to turn opaque tree-based machine learning models into transparent, human-understandable explanations or extract actionable decision rules.
Not a fit if your models are not tree-based ensembles (for example, linear models or neural networks), or if you only need prediction accuracy without explaining the reasoning.
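WoodTapper's own call signatures are not documented in this entry, so the sketch below only prepares the kind of input it expects, a fitted scikit-learn tree ensemble plus a dataset, and leaves the rule-extraction step as a comment rather than guessing the API.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# A fitted scikit-learn tree ensemble is the input WoodTapper works on.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# From here, WoodTapper would take `model` and the dataset and return
# human-readable decision rules or influential similar examples; its
# actual entry points are not shown in this entry, so they are omitted.
print(model.n_estimators, round(model.score(X, y), 3))
```

Any tree ensemble with the scikit-learn estimator interface (random forests, gradient boosting) would play the same role here.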
Stars: 36
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/artefactory/woodtapper"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
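The same endpoint can be queried from Python with only the standard library. The URL pattern comes from the curl example above; the auth header name used for the optional key is an assumption, not something this entry documents.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key=None) -> dict:
    # An API key lifts the limit from 100 to 1,000 requests/day;
    # the "Authorization" header name here is assumed.
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(quality_url("artefactory", "woodtapper"))
```

The response is JSON mirroring the stats listed above; `fetch_quality` is not called here so the snippet runs without network access.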
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...