rgklab/detectron

Official repository for the ICLR 2023 paper "A Learning Based Hypothesis Test for Harmful Covariate Shift"

Quality score: 35 / 100 (Emerging)

This tool helps machine learning practitioners determine whether a pre-trained model can be trusted to make reliable predictions on a new, unlabeled dataset. You provide your existing model and a batch of new data, and it runs a learned hypothesis test that decides whether the new data has shifted harmfully from the model's training distribution. It's designed for data scientists, ML engineers, and anyone deploying models into real-world environments.

No commits in the last 6 months.

Use this if you need to automatically assess whether new, unlabeled data has 'shifted' too much from your model's original training data, potentially causing your model to perform poorly.

Not ideal if you already know your new data is significantly different and you need tools for model retraining or adaptation, rather than just detection.
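To make the idea of "detecting shift" concrete, here is a minimal sketch of a distribution-shift test. Note the caveat in the code: this is a generic permutation test on sample means, not the repository's actual method (the Detectron paper trains an ensemble that is rewarded for disagreeing on the new data); all names here are illustrative.

```python
import numpy as np

def mean_shift_pvalue(train, new, n_perm=1000, seed=0):
    """Permutation test on the absolute difference of sample means.

    NOTE: a deliberately simple stand-in, not Detectron's learned test.
    Small p-values suggest the new sample is distributed differently
    from the training sample.
    """
    rng = np.random.default_rng(seed)
    observed = abs(train.mean() - new.mean())
    pooled = np.concatenate([train, new])
    n = len(train)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel points at random under the null
        if abs(pooled[:n].mean() - pooled[n:].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one-smoothed p-value

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 500)
same = rng.normal(0.0, 1.0, 500)     # drawn from the training distribution
shifted = rng.normal(0.5, 1.0, 500)  # mean-shifted covariates

p_same = mean_shift_pvalue(train, same)
p_shift = mean_shift_pvalue(train, shifted)
print(f"p (no shift): {p_same:.3f}, p (shifted): {p_shift:.3f}")
```

A small p-value for the shifted sample would flag the new data as drifted; a tool like this repository goes further by asking whether the shift actually harms the given model.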

machine-learning-deployment model-monitoring data-drift-detection model-reliability AI-safety
Signals: Stale (6 months) · No Package · No Dependents

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 5 / 25
- Maturity: 16 / 25
- Community: 14 / 25
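As a quick sanity check, the overall score is simply the sum of the four component scores listed above:

```python
# Component names and values are taken from the listing above.
components = {"Maintenance": 0, "Adoption": 5, "Maturity": 16, "Community": 14}
total = sum(components.values())
print(f"{total} / 100")  # 35 / 100, matching the overall quality score
```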


Stars: 11
Forks: 3
Language: Python
License: GPL-3.0
Last pushed: Jan 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rgklab/detectron"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
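The same request can be made from Python with the standard library. The endpoint path is taken from the curl example above; the response schema is not documented here, so the fetch is left as a commented-out sketch.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "rgklab", "detectron")
print(url)

# Uncomment to fetch live data (subject to the 100 requests/day limit):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```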