NiklasvonM/Self-Training

Iterative training on pseudo-labeled data: an experiment on the MNIST dataset

Score: 32 / 100 (Emerging)

This project helps machine learning researchers understand how to effectively train image classification models when only a small amount of labeled data is available. It takes a small set of labeled images and a large set of unlabeled images, then iteratively uses the model's own predictions to expand the training data. The output demonstrates how accuracy improves over iterations and how confidence thresholds affect the training process, providing insights for researchers studying semi-supervised learning.
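The iterative pseudo-labeling loop described above can be sketched as follows. This is a minimal toy illustration on 1-D data with a hypothetical threshold "model" standing in for an image classifier; the function names and the confidence heuristic are assumptions for illustration, not the repository's actual MNIST code:

```python
def train(labeled):
    """Toy 'model': the midpoint between the class means is the decision threshold."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict_with_confidence(threshold, x):
    """Predict a label and a confidence based on distance from the boundary."""
    label = 1 if x > threshold else 0
    confidence = min(abs(x - threshold) * 2, 1.0)
    return label, confidence

def self_train(labeled, unlabeled, confidence_threshold=0.5, iterations=5):
    """Self-training loop: repeatedly fit on the labeled pool, then move
    high-confidence predictions on unlabeled points into that pool."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    for _ in range(iterations):
        model = train(labeled)
        still_unlabeled = []
        for x in unlabeled:
            label, conf = predict_with_confidence(model, x)
            if conf >= confidence_threshold:
                labeled.append((x, label))  # accept the pseudo-label
            else:
                still_unlabeled.append(x)   # defer to a later iteration
        unlabeled = still_unlabeled
        if not unlabeled:
            break
    return train(labeled), labeled
```

Raising `confidence_threshold` admits fewer but cleaner pseudo-labels per iteration, which is exactly the trade-off the experiment's output is meant to show.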

No commits in the last 6 months.

Use this if you are a machine learning researcher exploring semi-supervised learning techniques and want to experiment with iterative pseudo-labeling for image classification.

Not ideal if you are looking for a ready-to-use solution for production image classification or a tool for datasets other than images.

Tags: semi-supervised learning, image classification, machine learning research, data scarcity, model training
Badges: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 12
Forks: 2
Language: Python
License: MIT
Last pushed: Sep 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/NiklasvonM/Self-Training"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.