JuliaTrustworthyAI/ConformalPrediction.jl

Predictive Uncertainty Quantification through Conformal Prediction for Machine Learning models trained in MLJ.

Quality score: 46 / 100 (Emerging)

When you work with machine learning models and need to gauge how reliable their predictions are, this tool helps you quantify that uncertainty. It wraps your existing MLJ model and, instead of returning a single point prediction, produces a range (a prediction interval) that is guaranteed to contain the true outcome with a user-specified probability (e.g. 95%). This is ideal for scientists, engineers, and analysts who rely on model predictions for critical decisions.
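To make the guarantee above concrete, here is a minimal sketch of split (inductive) conformal prediction in plain Julia, the core idea the package builds on. The constant "model", the data values, and the names (`f̂`, `q̂`) are all hypothetical stand-ins for illustration, not the package's API.

```julia
# Split (inductive) conformal prediction, sketched in plain Julia.
# A fitted point predictor plus calibration residuals yield an interval
# that covers the true label with probability ≥ 1 − α.

# Toy "model": predict the mean of the training labels (hypothetical stand-in).
train_y = [2.0, 2.5, 3.0, 3.5, 4.0]
f̂(x) = sum(train_y) / length(train_y)   # constant predictor, for illustration only

# Calibration set: nonconformity scores are the absolute residuals |y − f̂(x)|.
calib_x = 1:8
calib_y = [2.1, 2.9, 3.2, 2.8, 3.6, 2.4, 3.1, 3.4]
scores = sort(abs.(calib_y .- f̂.(calib_x)))

# Calibrated quantile: the ceil((n + 1)(1 − α))-th smallest score.
α = 0.1
n = length(scores)
q̂ = scores[min(n, ceil(Int, (n + 1) * (1 - α)))]

# Prediction interval for a new point: [f̂(x) − q̂, f̂(x) + q̂].
x_new = 9
interval = (f̂(x_new) - q̂, f̂(x_new) + q̂)
```

The key property is distribution-free: as long as calibration and test points are exchangeable, the interval covers the truth with probability at least 1 − α, regardless of how good or bad the underlying model is.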


Use this if you need statistically rigorous, distribution-free guarantees on the uncertainty of your machine learning model's predictions in Julia.

Not ideal if you are looking for an explanation of *why* the model made a certain prediction, as this focuses solely on the reliability of the prediction itself.

Tags: predictive-modeling · risk-assessment · decision-support · data-analysis · model-validation

No package · No dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 146
Forks: 10
Language: Julia
License: MIT
Last pushed: Feb 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JuliaTrustworthyAI/ConformalPrediction.jl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.