aclai-lab/Sole.jl

Sole.jl – Long live transparent modeling!

Overall score: 33 / 100 (Emerging)

This project helps data scientists, researchers, and anyone else building predictive models understand why their models make certain decisions. It takes existing machine learning models, particularly decision trees, and converts them into transparent, human-readable logical rules. The output lets you inspect, verify, and even manually refine the model's 'thought process'.

Use this if you need to explain the reasoning behind your machine learning model's predictions, ensure ethical compliance, or gain new insights from the model's learned knowledge.

Not ideal if your primary goal is simply to achieve high prediction accuracy without needing to understand or interpret the model's internal logic.

interpretable-AI model-transparency decision-science knowledge-extraction ethical-AI
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 3 / 25


Stars: 48
Forks: 1
Language: Julia
License: MIT
Last pushed: Jan 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aclai-lab/Sole.jl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
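The same endpoint can be queried programmatically. A minimal sketch using only the Python standard library; the URL path segments (`category`, `owner`, `repo`) mirror the curl example above, and the assumption that the response body is JSON is just that, an assumption:

```python
# Minimal sketch: query the quality API from Python (assumes a JSON response).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("ml-frameworks", "aclai-lab", "Sole.jl"))
```

Within the free tier, no authentication header is needed; with a key, consult the API's own docs for how to pass it.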