linkedin/TE2Rules
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
Machine learning models like XGBoost or Random Forest are often used for important predictions (e.g., in finance or healthcare), but it can be hard to understand *why* a model made a specific decision. This project extracts clear, human-readable rules describing which conditions lead the model to a given classification: it takes your trained model and the data it was trained on, and outputs a concise list of 'if-then' rules that data scientists or domain experts can use to understand and validate the model's logic.
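To illustrate the idea of reading a tree ensemble as if-then rules, here is a minimal sketch using only scikit-learn's tree export on a single member of the ensemble. Note this is not the TE2Rules API (see the project's README for its actual explainer interface); TE2Rules mines a concise rule list across the whole ensemble, whereas this sketch just dumps the raw rules of one tree.

```python
# Sketch: each tree in an ensemble encodes if-then rules over feature
# thresholds. We train a small gradient-boosted ensemble and print the
# rules of its first tree. TE2Rules itself goes further, distilling the
# full ensemble into a short, faithful rule list.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import export_text

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingClassifier(n_estimators=10, random_state=0).fit(X, y)

# estimators_ holds the individual decision trees of the ensemble;
# export_text renders one tree's decision paths as nested if-then rules.
first_tree = model.estimators_[0, 0]
rules = export_text(first_tree, feature_names=[f"f{i}" for i in range(4)])
print(rules)
```

Each printed branch (e.g. `f2 <= 0.53`) is one condition in a rule; a root-to-leaf path is one complete if-then rule, which is the kind of artifact TE2Rules surfaces for the ensemble as a whole.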
No commits in the last 6 months. Available on PyPI.
Use this if you need to understand the underlying logic of your Tree Ensemble models (like XGBoost or Random Forest) for binary classification tasks, especially in high-stakes domains where transparency and trust are crucial.
Not ideal if your model is not a Tree Ensemble (e.g., a neural network) or if you are working on a regression task rather than binary classification.
Stars: 63
Forks: 7
Language: Python
License: —
Last pushed: Apr 22, 2024
Commits (30d): 0
Dependencies: 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/linkedin/TE2Rules"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi: Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav: Code for the TCAV ML interpretability project
MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5: A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...