GokuMohandas/monitoring-ml
Learn how to monitor ML systems to identify and mitigate sources of drift before model performance decays.
This project helps machine learning practitioners maintain the accuracy and reliability of their deployed models. It ingests real-time model performance metrics and input/output data and helps you identify when your model's predictive power is starting to degrade. The primary users are machine learning engineers, MLOps specialists, and data scientists responsible for managing models in production.
No commits in the last 6 months.
Use this if you have machine learning models actively deployed and need to proactively detect when their performance is slipping due to changes in data or underlying relationships.
Not ideal if you are still in the model development or training phase and haven't deployed any models to a live environment yet.
Stars
99
Forks
19
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Sep 12, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/GokuMohandas/monitoring-ml"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
MAIF/eurybia
⚓ Eurybia monitors model drift over time and secures model deployment with data validation
WeBankFinTech/Prophecis
Prophecis is a one-stop cloud native machine learning platform.
fabriziosalmi/proxmox-lxc-autoscale-ml
Automatically scale LXC container resources on Proxmox hosts with AI
aws-samples/amazon-sagemaker-drift-detection
This sample demonstrates how to set up an Amazon SageMaker MLOps end-to-end pipeline for drift detection
sustainable-computing-io/clever
Container Level Energy-efficient VPA Recommender