MAIF/eurybia
⚓ Eurybia monitors model drift over time and secures model deployment with data validation
Eurybia helps data scientists and ML engineers ensure their machine learning models remain accurate and reliable after deployment. It compares your production dataset against the original training data and generates a detailed HTML report that highlights shifts in data patterns and model performance, so you can confirm the model is still making valid predictions.
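A minimal sketch of that workflow, based on the SmartDrift quickstart shown in Eurybia's documentation; the CSV file names and report titles here are placeholders:

import pandas as pd
from eurybia import SmartDrift

# Placeholder inputs: the original training data and a recent production sample.
df_baseline = pd.read_csv("train.csv")
df_current = pd.read_csv("production.csv")

# SmartDrift compares the two datasets; optional deployed_model/encoding
# arguments additionally let the report compare predicted distributions.
sd = SmartDrift(df_current=df_current, df_baseline=df_baseline)

# full_validation=True also runs the data-validation checks on the inputs.
sd.compile(full_validation=True)

# Writes the standalone HTML drift report described above.
sd.generate_report(
    output_file="eurybia_drift_report.html",
    title_story="Production vs. training data drift",
)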
Use this if you need to monitor the health and performance of your machine learning models in production, detect when the real-world data starts to differ from your training data, or validate incoming data before it's fed to a deployed model.
Not ideal if you're looking for a low-code solution for general data exploration or visualization that isn't specifically focused on model drift or data validation.
Stars: 216
Forks: 26
Language: Jupyter Notebook
License: Apache-2.0
Category: mlops
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/MAIF/eurybia"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
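If you'd rather call the endpoint from Python than curl, here is a minimal sketch using the requests library; the response is assumed to be a JSON payload mirroring the stats above, since the schema isn't documented on this page:

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/mlops/MAIF/eurybia"

resp = requests.get(URL, timeout=10)  # no key: limited to 100 requests/day
resp.raise_for_status()
print(resp.json())  # assumed JSON body with the repo-quality stats shown above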
Related tools
WeBankFinTech/Prophecis
Prophecis is a one-stop, cloud-native machine learning platform.
fabriziosalmi/proxmox-lxc-autoscale-ml
Automatically scales LXC container resources on Proxmox hosts with AI
aws-samples/amazon-sagemaker-drift-detection
This sample demonstrates how to set up an end-to-end Amazon SageMaker MLOps pipeline for drift detection
sustainable-computing-io/clever
Container Level Energy-efficient VPA Recommender
kennethleungty/End-to-End-AutoML-Insurance
An End-to-End Implementation of AutoML with H2O, MLflow, FastAPI, and Streamlit for Insurance Cross-Sell