ilias-ant/adversarial-validation

A tiny framework to perform adversarial validation of your training and test data.

Quality score: 14 / 100 (Experimental)

When building machine learning models, it's crucial that your training data accurately represents the real-world data your model will encounter. This tool helps data scientists and ML practitioners determine if their training and test datasets come from the same distribution. It takes your prepared training and test data and outputs an assessment of whether they are similar enough to trust your model's real-world predictions.
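The core idea behind adversarial validation can be sketched in a few lines: label training rows 0 and test rows 1, train a classifier to tell them apart, and inspect the ROC AUC. An AUC near 0.5 means the classifier cannot distinguish the two sets (same distribution); an AUC near 1.0 signals distribution shift. The snippet below is a conceptual illustration using scikit-learn, not this repo's actual API; the toy data, model choice, and thresholds are assumptions.

```python
# Conceptual sketch of adversarial validation (not this library's API).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data: the "test" set is deliberately drawn from a shifted distribution.
X_train = rng.normal(0.0, 1.0, size=(500, 5))
X_test = rng.normal(0.7, 1.0, size=(500, 5))

# Label origin: 0 = training rows, 1 = test rows.
X = np.vstack([X_train, X_test])
y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])

# Cross-validated AUC of a classifier trying to separate the two sets.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

print(f"adversarial AUC: {auc:.3f}")
```

With the shift above, the AUC lands well above 0.5, flagging that the two sets are distinguishable; on identically distributed data it would hover around 0.5.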

No commits in the last 6 months.

Use this if you need to quickly check if your machine learning model's training and test datasets are sufficiently similar to ensure reliable performance on unseen data.

Not ideal if you are looking for a tool to automatically clean or pre-process your data, or if you need to perform complex feature engineering.

machine-learning-validation data-quality-assessment predictive-modeling dataset-comparison model-deployment
No License · Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 24
Forks:
Language: Python
License:
Last pushed: Jan 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ilias-ant/adversarial-validation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.