deepchecks/deepchecks
Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to test your data and models thoroughly, from research to production.
This tool helps data scientists and ML engineers ensure their machine learning models perform reliably from initial development through continuous operation. It takes your dataset and trained model, then runs a series of tests to identify potential issues such as data quality problems, performance regressions, or data drift over time. The output is a clear report detailing any findings, helping you build trust in your models.
3,990 stars. Used by 1 other package. Available on PyPI.
Use this if you need to systematically validate your machine learning models and the data they use, both before deployment and after they are running in production.
Not ideal if you are looking for a general-purpose data analysis or visualization tool that doesn't specifically focus on machine learning model validation.
Stars: 3,990
Forks: 289
Language: Python
License: —
Category: —
Last pushed: Dec 28, 2025
Commits (30d): 0
Dependencies: 21
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/deepchecks/deepchecks"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
treeverse/dvc
🦉 Data Versioning and ML Experiments
runpod/runpod-python
🐍 | Python library for RunPod API and serverless worker SDK.
microsoft/vscode-jupyter
VS Code Jupyter extension
4paradigm/OpenMLDB
OpenMLDB is an open-source machine learning database that provides a feature platform computing...
uber/petastorm
Petastorm library enables single machine or distributed training and evaluation of deep learning...