ilias-ant/adversarial-validation
A tiny framework to perform adversarial validation of your training and test data.
When building machine learning models, it's crucial that your training data accurately represents the real-world data your model will encounter. This tool helps data scientists and ML practitioners determine whether their training and test datasets come from the same distribution: it takes your prepared training and test data and reports whether the two are similar enough to trust your model's predictions on unseen data.
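The underlying technique can be sketched in a few lines. This is not this repository's actual API, just a minimal illustration of the idea, using only NumPy: label training rows 0 and test rows 1, fit a classifier to tell them apart, and inspect its ROC AUC. An AUC near 0.5 means the classifier cannot separate the two sets, so they plausibly come from the same distribution; an AUC near 1.0 means the sets differ and evaluation on that test set may mislead. The function name `adversarial_auc` is hypothetical.

```python
import numpy as np

def adversarial_auc(X_train, X_test, epochs=500, lr=0.1):
    """Hypothetical sketch: how separable are train rows from test rows?"""
    # Stack both sets and label their origin: 0 = train, 1 = test.
    X = np.vstack([X_train, X_test]).astype(float)
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    # Standardize features and append a bias column.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    X = np.hstack([X, np.ones((len(X), 1))])
    # Logistic regression fit by plain gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    scores = 1.0 / (1.0 + np.exp(-X @ w))
    # In-sample ROC AUC via the Mann-Whitney rank statistic.
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n1, n0 = int(y.sum()), int(len(y) - y.sum())
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
```

On two samples drawn from the same distribution this returns an AUC close to 0.5; if the test data is shifted away from the training data, the AUC climbs toward 1.0. A production tool would also cross-validate and report per-feature importances to show *which* features drift, which this sketch omits.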
No commits in the last 6 months.
Use this if you need to quickly check whether your machine learning model's training and test datasets are sufficiently similar to ensure reliable performance on unseen data.
Not ideal if you are looking for a tool to automatically clean or pre-process your data, or if you need to perform complex feature engineering.
Stars: 24
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ilias-ant/adversarial-validation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research