kingabzpro/Automating-Machine-Learning-Testing

Automating Machine Learning Testing using GitHub Actions and DeepChecks

Score: 40 / 100 (Emerging)

This project helps MLOps engineers and data scientists ensure the reliability of their machine learning models before deployment. It automates checks on new model versions and their data, providing instant feedback on potential issues like data drift or performance degradation. Anyone responsible for maintaining production-ready ML systems will find this useful.

No commits in the last 6 months.

Use this if you need to automatically validate new machine learning models and their input data as part of your continuous integration workflow.

Not ideal if you are looking for a tool to manually test ad-hoc machine learning models or perform one-off data quality checks.

MLOps model-validation data-quality continuous-integration machine-learning-engineering
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 17 / 25

How are scores calculated?
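The overall score appears to be the simple sum of the four category scores above, each graded out of 25. A quick check in Python (the exact weighting formula is not documented here; the unweighted sum is an observation from this page's numbers):

```python
# Category scores shown on this page (each out of 25).
scores = {"Maintenance": 0, "Adoption": 7, "Maturity": 16, "Community": 17}

# The overall score matches the unweighted sum of the four categories.
total = sum(scores.values())
print(f"{total} / 100")  # → 40 / 100
```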

Stars: 41
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Jul 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kingabzpro/Automating-Machine-Learning-Testing"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
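The same endpoint can be called from Python with the standard library. Only the URL itself is documented above; the shape of the JSON response (and its field names) is not, so the sketch below stops at fetching and pretty-printing whatever comes back:

```python
# Sketch: fetch this repository's quality data from the API shown above.
# Only the endpoint URL is documented; the response schema is unknown here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "kingabzpro",
                  "Automating-Machine-Learning-Testing")

# Uncomment to perform the request (no API key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```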