EricssonResearch/illia

Framework agnostic Bayesian Neural Network library.

Score 42 / 100 (Emerging)

This library helps deep learning practitioners build models that not only make predictions but also provide a measure of confidence or uncertainty about those predictions. It takes standard deep learning layers, such as convolutional layers, and replaces them with Bayesian counterparts so the model can quantify the reliability of its outputs. This is useful for machine learning engineers, data scientists, and researchers working on critical applications where understanding prediction certainty is as important as the prediction itself.
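To make the idea concrete, here is a minimal sketch of what a Bayesian layer does, independent of illia's actual API (which is not shown on this page): weights are distributions rather than point values, so repeated stochastic forward passes yield both a predictive mean and an uncertainty estimate. All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Variational parameters of a hypothetical Bayesian linear layer:
# a mean and a standard deviation for every weight.
in_dim, out_dim = 4, 2
w_mu = rng.normal(size=(in_dim, out_dim))
w_sigma = np.full((in_dim, out_dim), 0.1)

def bayesian_forward(x, n_samples=100):
    """Monte Carlo forward passes: sample weights, collect outputs."""
    outs = []
    for _ in range(n_samples):
        w = rng.normal(w_mu, w_sigma)   # draw one weight sample per pass
        outs.append(x @ w)
    outs = np.stack(outs)               # shape: (n_samples, batch, out_dim)
    # Predictive mean plus a per-output uncertainty estimate.
    return outs.mean(axis=0), outs.std(axis=0)

x = rng.normal(size=(3, in_dim))
pred_mean, pred_std = bayesian_forward(x)
print(pred_mean.shape, pred_std.shape)  # (3, 2) (3, 2)
```

A deterministic layer would return only `pred_mean`; the extra `pred_std` is the "measure of confidence" the description refers to.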

Use this if you are a deep learning engineer or researcher needing to incorporate uncertainty quantification into your models, especially in fields like telecommunications or medicine where prediction reliability is paramount.

Not ideal if you are a business user or practitioner without a strong background in deep learning model development and Python programming.

Tags: deep learning, uncertainty quantification, machine learning engineering, predictive modeling, model reliability
No Package · No Dependents
Maintenance 13 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 7
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 16, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/EricssonResearch/illia"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
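The same endpoint can be queried from Python instead of curl. The response schema is not documented on this page, so the sketch below simply returns the parsed JSON without assuming any fields; the helper name is illustrative.

```python
import json
import urllib.request

# Endpoint shown in the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/EricssonResearch/illia")

def fetch_quality(url=URL, timeout=10):
    """Fetch the quality record and return the parsed JSON payload."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

Without an API key this counts against the 100 requests/day anonymous limit noted above.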