jfc43/self-training-ensembles

Proposes a principled and practically effective framework for unsupervised accuracy estimation and error detection, with theoretical analysis and state-of-the-art performance.

Score: 27 / 100 (Experimental)

This project helps machine learning engineers and researchers validate the accuracy of their classification models without needing human-labeled data. It takes your unlabeled image datasets and trained classification models, then provides an estimate of how accurate your model is and identifies which predictions are likely incorrect. This is useful for anyone deploying models where obtaining new labeled data is costly or impossible.

No commits in the last 6 months.

Use this if you need to understand the performance and identify errors in a classification model when you only have access to unlabeled data for evaluation.

Not ideal if you already have high-quality labeled test sets or are working with non-classification tasks.

machine-learning-validation model-evaluation unsupervised-learning image-classification error-detection
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 5 / 25


Stars: 16
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Feb 17, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jfc43/self-training-ensembles"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
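The same request can be made from Python. A minimal sketch, assuming only the endpoint URL shown above; the structure of the JSON response is not documented here, so the result is returned as an unparsed dict rather than mapped to specific fields:

```python
# Sketch: fetch a repo's quality report from the pt-edge API (endpoint URL
# taken from the curl example above; response schema is an assumption).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(catalog: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository in a given catalog."""
    return f"{API_BASE}/{catalog}/{owner}/{repo}"


def fetch_quality(catalog: str, owner: str, repo: str) -> dict:
    """GET the quality report as parsed JSON (no key needed, 100 req/day)."""
    with urllib.request.urlopen(quality_url(catalog, owner, repo)) as resp:
        return json.load(resp)


# Network call left commented so the snippet runs without connectivity:
# report = fetch_quality("ml-frameworks", "jfc43", "self-training-ensembles")
```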