rikhuijzer/SIRUS.jl
Interpretable Machine Learning via Rule Extraction
This tool helps data analysts and domain experts understand why a machine learning model makes the predictions it does, for both classification and regression tasks. It takes your dataset and produces a small set of simple, human-readable "if-then" rules that are the model itself, rather than bolting a separate explanation layer onto a complex black-box model. The entire decision-making process is therefore transparent and stable.
Use this if you need a classification or regression model where it is critical to understand and explain exactly how predictions are made, as in regulated or other high-stakes environments.
Not ideal if maximum predictive accuracy is your primary goal and interpretability is secondary: the method trades some raw performance for transparency.
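To make the "set of if-then rules" idea concrete, here is a minimal, language-agnostic sketch (in Python rather than SIRUS.jl's Julia API) of how such a rule-set classifier produces a prediction: each rule checks one threshold condition and emits a probability, and the final prediction averages the rule outputs. The features, thresholds, and probabilities below are invented for illustration; a real SIRUS model learns its rules from a random forest fitted to the data.

```python
# Toy rule-set classifier. Rules and numbers are hypothetical, chosen only
# to illustrate the shape of the model that rule-extraction methods produce.

def make_rule(feature, threshold, p_if_true, p_if_false):
    """Build a rule: 'if x[feature] < threshold then p_if_true else p_if_false'."""
    def rule(x):
        return p_if_true if x[feature] < threshold else p_if_false
    return rule

# Hypothetical rules for predicting a class probability from two features.
rules = [
    make_rule("age", 50.0, 0.8, 0.3),
    make_rule("bmi", 25.0, 0.6, 0.4),
]

def predict_proba(x, rules):
    """Average the outputs of all rules; each rule always emits a probability."""
    return sum(rule(x) for rule in rules) / len(rules)

# age 42 < 50 fires the first branch (0.8); bmi 31 >= 25 takes the second (0.4).
p = predict_proba({"age": 42.0, "bmi": 31.0}, rules)
print(round(p, 2))  # → 0.6
```

Because every rule is a single readable threshold check, a domain expert can audit the full model by reading the rule list, which is the core appeal of this approach over post-hoc explanations.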
Stars: 40
Forks: 3
Language: Julia
License: MIT
Last pushed: Nov 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rikhuijzer/SIRUS.jl"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...