deepchecks/deepchecks

Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.

Score: 61 / 100 (Established)

This tool helps data scientists and ML engineers ensure their machine learning models perform reliably from initial development through continuous operation. It takes your dataset and trained model, then runs a series of tests to identify potential issues like data quality problems, performance regressions, or shifts in data over time. The output is a clear report detailing any findings, helping you trust your models more.

3,990 stars. Used by 1 other package. Available on PyPI.

Use this if you need to systematically validate your machine learning models and the data they use, both before deployment and after they are running in production.

Not ideal if you are looking for a general-purpose data analysis or visualization tool that doesn't specifically focus on machine learning model validation.

Tags: machine-learning-validation, model-monitoring, data-quality-assurance, MLOps, AI-governance
Maintenance: 6 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 19 / 25


Stars: 3,990
Forks: 289
Language: Python
License:
Last pushed: Dec 28, 2025
Commits (30d): 0
Dependencies: 21
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/deepchecks/deepchecks"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
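The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the URL is taken from the curl command above, while the shape of the JSON response is an assumption, not a documented schema.

```python
# Fetch the quality data shown on this page via the public API.
# The endpoint comes from the curl example; the response is assumed
# to be a JSON object, but its field names are not documented here.
import json
import urllib.request

API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/deepchecks/deepchecks"
)

def fetch_quality(url: str = API_URL) -> dict:
    """Return the decoded JSON payload for the package."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

With a free API key, a higher daily limit applies; how the key is passed (header vs. query parameter) is not stated on this page, so consult the API's own documentation before adding it.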