EricssonResearch/illia
Framework agnostic Bayesian Neural Network library.
This library helps deep learning practitioners build models that not only make predictions but also report a measure of confidence or uncertainty about those predictions. It takes standard deep learning layers, such as convolutional layers, and replaces them with Bayesian counterparts so the model can quantify the reliability of its outputs. This is useful for machine learning engineers, data scientists, and researchers working on critical applications where understanding prediction certainty is as important as the prediction itself.
Use this if you are a deep learning engineer or researcher needing to incorporate uncertainty quantification into your models, especially in fields like telecommunications or medicine where prediction reliability is paramount.
Not ideal if you are a business user or practitioner without a strong background in deep learning model development and Python programming.
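To make the idea concrete, here is a minimal sketch (not illia's actual API) of a Bayesian layer in plain NumPy: each weight is a Gaussian distribution rather than a point value, a fresh weight sample is drawn on every forward pass via the reparameterization trick, and the spread across repeated predictions serves as the uncertainty estimate. The class and parameter names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Toy variational linear layer: each weight is a Gaussian, not a point value."""

    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, (n_in, n_out))  # weight means
        self.rho = np.full((n_in, n_out), -3.0)        # pre-softplus std parameters

    def forward(self, x):
        sigma = np.log1p(np.exp(self.rho))             # softplus keeps std positive
        # Reparameterization trick: sample weights as mu + sigma * eps
        w = self.mu + sigma * rng.normal(size=self.mu.shape)
        return x @ w

layer = BayesianLinear(4, 2)
x = np.ones((1, 4))

# Repeated forward passes differ because weights are resampled each call;
# the spread across samples is the model's predictive uncertainty.
samples = np.stack([layer.forward(x) for _ in range(100)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```

A deterministic layer would return the same output every call; here `std` is strictly positive, which is exactly the extra signal a Bayesian network provides.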
Stars: 7
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/EricssonResearch/illia"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
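The same endpoint can be called from Python with the standard library. The sketch below builds the URL shown in the curl example; `quality_url` and `fetch_quality` are hypothetical helper names (there is no official client), and the response schema is not documented here, so the fetch helper only parses generic JSON.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body (schema undocumented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Same URL as the curl example above.
url = quality_url("ml-frameworks", "EricssonResearch", "illia")
```

Calling `fetch_quality("ml-frameworks", "EricssonResearch", "illia")` would perform the request and return the parsed JSON, subject to the 100 requests/day anonymous limit.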
Higher-rated alternatives
sbi-dev/sbi: sbi is a Python package for simulation-based inference, designed to meet the needs of both...
SMTorg/smt: Surrogate Modeling Toolbox
reservoirpy/reservoirpy: A simple and flexible code for Reservoir Computing architectures like Echo State Networks
GPflow/GPflow: Gaussian processes in TensorFlow
dswah/pyGAM: [CONTRIBUTORS WELCOME] Generalized Additive Models in Python