awesome-ml-monitoring and awesome-ml-experiment-management
These tools complement each other: experiment management tracks and organizes the many iterations of ML models during development, while ML monitoring safeguards the quality and performance of those models once they are deployed.
About awesome-ml-monitoring
awesome-mlops/awesome-ml-monitoring
A curated list of awesome open source tools and commercial products for monitoring data quality, monitoring model performance, and profiling data 🚀
Staying on top of your machine learning models' performance and the quality of data feeding them is crucial after they've been deployed. This project provides a curated list of tools that help you monitor your ML models and data, identify issues like data drift or model decay, and get insights into why they might be underperforming. Data scientists, MLOps engineers, and analytics professionals will find this useful for maintaining healthy ML systems.
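To make the idea of data drift concrete, here is a minimal sketch of one simple drift check: comparing a production feature sample against a reference (training-time) sample and flagging drift when the mean shifts by more than a chosen number of reference standard deviations. The function name, samples, and threshold are all illustrative assumptions, not part of any listed tool; real monitoring tools use richer statistics (KS tests, PSI, and so on).

```python
import statistics

def mean_shift_drift(reference, production, threshold=0.5):
    """Illustrative drift check (not from any specific tool):
    flag drift when the production mean moves more than
    `threshold` reference standard deviations from the reference mean."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.fmean(production) - ref_mean) / ref_std
    return shift, shift > threshold

# A small, stable shift stays under the threshold ...
score, drifted = mean_shift_drift([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 2.9, 4.2])
# ... while a large distribution shift trips the flag.
score2, drifted2 = mean_shift_drift([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0])
```

Per-feature checks like this are the simplest building block; the tools in this list layer alerting, dashboards, and multivariate methods on top of the same basic comparison.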
About awesome-ml-experiment-management
awesome-mlops/awesome-ml-experiment-management
A curated list of awesome open source tools and commercial products for ML Experiment Tracking and Management 🚀
When you're developing machine learning models, you often run many experiments with different datasets, model architectures, and parameters. This resource helps you keep track of all those experiments, including the inputs, configurations, and results, so you can easily compare them and understand what worked best. It's for anyone involved in developing and iterating on machine learning models, from individual data scientists to ML engineering teams.
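At its core, experiment tracking means recording each run's inputs (parameters) and outputs (metrics) so runs can be compared later. The sketch below is a hypothetical, stdlib-only illustration of that idea, appending runs to a JSON-lines file and picking the best one; the function names and file layout are assumptions, and the tools in this list provide far more (artifact storage, UIs, lineage).

```python
import json
from datetime import datetime, timezone

def log_run(path, params, metrics):
    """Append one experiment run (params + metrics) as a JSON line.
    Illustrative only; real trackers also store code versions, artifacts, etc."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "params": params,
        "metrics": metrics,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def best_run(path, metric, maximize=True):
    """Return the logged run with the best value for `metric`."""
    with open(path) as f:
        runs = [json.loads(line) for line in f]
    key = lambda r: r["metrics"][metric]
    return max(runs, key=key) if maximize else min(runs, key=key)
```

A typical loop would call `log_run` after every training run with the hyperparameters tried and the resulting scores, then use `best_run` (or a dashboard, in a real tool) to see what worked best.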