Yu-Group/imodels-experiments
Experiments with experimental rule-based models that accompany the imodels package.
This project helps machine learning practitioners benchmark and compare interpretable, rule-based models on classification and regression tasks. You supply your datasets and your own or existing supervised machine learning models, and the project outputs performance comparisons across models and datasets (a minimal sketch of this workflow follows below). It is aimed at data scientists, researchers, and anyone building and evaluating interpretable predictive models in fields such as healthcare, finance, or marketing.
Use this if you need to rigorously evaluate and compare the predictive performance and interpretability of various rule-based models on your own datasets to find the best fit for your application.
Not ideal if you want to benchmark unsupervised learning models, or if comparing different model architectures and configurations is not your goal.
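To make that workflow concrete, here is a minimal benchmarking sketch in Python. It assumes the sklearn-compatible estimators from the companion imodels package (csinva/imodels, listed below); the estimators, dataset, and metric are illustrative choices, not the configuration this repo actually uses.

# Minimal sketch: cross-validated comparison of two interpretable
# rule-based classifiers from the companion imodels package.
# FIGSClassifier and RuleFitClassifier are illustrative picks; the
# experiments in this repo may use different models and datasets.
from imodels import FIGSClassifier, RuleFitClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "FIGS": FIGSClassifier(),
    "RuleFit": RuleFitClassifier(),
}

for name, model in models.items():
    # 5-fold cross-validated ROC AUC: the kind of per-model,
    # per-dataset score a performance comparison reports
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")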
Stars: 18
Forks: 6
Language: Jupyter Notebook
License: —
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Yu-Group/imodels-experiments"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
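For programmatic use, a minimal Python sketch of the same call, assuming only that the endpoint returns a JSON body (the response schema is not documented on this page):

# Fetch the quality report shown above and pretty-print the JSON.
# The response's field names are unknown here, so nothing beyond
# valid JSON is assumed about its shape.
import json
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Yu-Group/imodels-experiments")
resp = requests.get(url, timeout=30)
resp.raise_for_status()  # surface HTTP errors (e.g., rate limiting)
print(json.dumps(resp.json(), indent=2))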
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...