MAIF/eurybia

⚓ Eurybia monitors model drift over time and secures model deployment with data validation

Quality score: 52 / 100 (Established)

Helps Data Scientists and ML Engineers ensure their machine learning models remain accurate and reliable after deployment. It takes your production dataset and compares it against your original training data, generating a detailed HTML report. This report highlights changes in data patterns and model performance, ensuring the model continues to make valid predictions.


Use this if you need to monitor the health and performance of your machine learning models in production, detect when the real-world data starts to differ from your training data, or validate incoming data before it's fed to a deployed model.

Not ideal if you're looking for a low-code solution for general data exploration or visualization that isn't specifically focused on model drift or data validation.

Tags: machine-learning-operations, model-monitoring, data-quality, predictive-analytics, AI-governance
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 216
Forks: 26
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/MAIF/eurybia"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.