jfc43/self-training-ensembles
A principled and practically effective framework for unsupervised accuracy estimation and error detection, with theoretical analysis and state-of-the-art performance.
This project helps machine learning engineers and researchers estimate the accuracy of their classification models without human-labeled data. Given an unlabeled image dataset and a trained classifier, it estimates the model's accuracy and flags predictions that are likely incorrect. This is useful for anyone deploying models where obtaining new labeled data is costly or impossible.
No commits in the last 6 months.
Use this if you need to understand the performance and identify errors in a classification model when you only have access to unlabeled data for evaluation.
Not ideal if you already have high-quality labeled test sets or are working with non-classification tasks.
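The core idea behind methods like this one is to use disagreement between a self-trained ensemble and the main model as a proxy for error. The repository's actual API is not shown on this page, so the sketch below is a generic, illustrative version of the disagreement heuristic; the function name and the majority-vote proxy are assumptions, not the paper's exact estimator.

```python
import numpy as np

def estimate_accuracy(model_preds, ensemble_preds):
    """Illustrative proxy (not the repo's exact method): estimate accuracy
    as the fraction of samples where the main model agrees with the
    ensemble's majority vote, and flag disagreements as likely errors.

    model_preds:    (n_samples,) predicted labels from the main model
    ensemble_preds: (n_members, n_samples) predicted labels per member
    """
    # Majority vote per sample (each column holds one sample's member votes).
    majority = np.array([np.bincount(col).argmax() for col in ensemble_preds.T])
    agree = model_preds == majority
    # Estimated accuracy, plus a boolean mask of likely-incorrect predictions.
    return agree.mean(), ~agree

# Toy usage: 3 ensemble members, 3 unlabeled samples.
ensemble_preds = np.array([[0, 1, 1],
                           [0, 1, 0],
                           [0, 0, 1]])
model_preds = np.array([0, 1, 0])
est_acc, likely_errors = estimate_accuracy(model_preds, ensemble_preds)
```

In this toy run the model matches the majority vote on two of three samples, so the estimated accuracy is 2/3 and the third prediction is flagged as a likely error.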
Stars: 16
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Feb 17, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jfc43/self-training-ensembles"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
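The same endpoint can be queried from Python with the standard library. The response schema is not documented on this page, so the sketch below simply decodes whatever JSON the endpoint returns; only the URL comes from the curl example above.

```python
import json
import urllib.request

# URL taken verbatim from the curl example; the response schema is unknown
# here, so we decode it generically rather than assuming specific fields.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/jfc43/self-training-ensembles")

def fetch_quality(url=URL):
    """Fetch the quality record for a repository and return the parsed JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

A usage example would be `data = fetch_quality()`, followed by inspecting `data.keys()` to discover the fields the API actually returns.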
Higher-rated alternatives
EmuKit/emukit
A Python-based toolbox of various methods in decision making, uncertainty quantification and...
google/uncertainty-baselines
High-quality implementations of standard and SOTA methods on a variety of tasks.
nielstron/quantulum3
Library for unit extraction - fork of quantulum for python3
IBM/UQ360
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you...
aamini/evidential-deep-learning
Learn fast, scalable, and calibrated measures of uncertainty using neural networks!