rikhuijzer/SIRUS.jl

Interpretable Machine Learning via Rule Extraction

Score: 37 / 100 (Emerging)

This tool helps data analysts and domain experts understand why a machine learning model makes certain predictions, for classification and regression tasks. It takes your dataset and produces a set of simple, human-readable "if-then" rules that explain the model's logic directly, rather than wrapping a complex model in a separate explanation layer. This makes the entire decision-making process transparent and reliable.

Use this if you need to build a machine learning model for classification or regression where it's critical to understand and explain exactly how predictions are made, like in regulatory or high-stakes environments.

Not ideal if maximum predictive accuracy is your primary goal and interpretability is secondary, as SIRUS trades some raw performance for transparency.
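The rule-extraction workflow described above can be sketched in Julia. The MLJ-style API (`StableRulesClassifier`, `machine`, `fit!`) follows the package's documented interface, but the dataset, feature names, and parameter values below are illustrative assumptions, not part of this listing:

```julia
# Minimal sketch of rule extraction with SIRUS.jl, assuming its
# documented MLJ-style interface. The data here is synthetic.
using MLJ
using SIRUS

# A tiny tabular dataset: two numeric features and a binary label.
X = (; age    = [25.0, 40.0, 33.0, 58.0, 47.0, 29.0],
       income = [30.0, 80.0, 55.0, 90.0, 60.0, 35.0])
y = coerce([0, 1, 0, 1, 1, 0], Multiclass)

# Fit a stable rule ensemble; max_rules caps how many
# human-readable if-then rules the final model keeps.
model = StableRulesClassifier(; max_rules = 8)
mach = machine(model, X, y)
fit!(mach)

# Inspecting the fitted parameters lists the extracted rules
# directly, with no separate explanation layer needed.
println(fitted_params(mach))
```

The key design point this illustrates is that the rules *are* the model: what you print after fitting is the same object used for prediction, not a post-hoc approximation of a black box.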

Tags: predictive-modeling, model-auditing, decision-explanation, risk-assessment, regulatory-compliance
No Package · No Dependents
Maintenance 6 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 40
Forks: 3
Language: Julia
License: MIT
Last pushed: Nov 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rikhuijzer/SIRUS.jl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.