WalterSimoncini/fungivision

Library implementation of "No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations"

Score: 23 / 100 (Experimental)

This library helps machine learning practitioners improve image, text, and audio classification as well as image segmentation. Given raw input data (such as images from a dataset) and a pre-trained vision transformer, it generates "gradient features" from self-supervised objectives that, when combined with the model's standard embeddings, lead to more accurate downstream predictions. It targets researchers and engineers using self-supervised learning for feature extraction.
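As a rough sketch of the idea (function and variable names here are hypothetical, not the library's actual API): per-sample gradient vectors can be projected down to a fixed dimension, normalized, and concatenated with the backbone's embeddings before feeding a downstream classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(embeddings, gradients, proj_dim=768):
    """Hypothetical sketch: project high-dimensional per-sample gradients
    to proj_dim with a random projection, L2-normalize each feature block,
    and concatenate it with the model's embeddings."""
    proj = rng.standard_normal((gradients.shape[1], proj_dim)) / np.sqrt(proj_dim)
    grad_feats = gradients @ proj

    def l2(x):
        # Row-wise L2 normalization (epsilon guards against zero rows)
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    return np.concatenate([l2(embeddings), l2(grad_feats)], axis=1)

# 4 samples: 768-dim embeddings, 10,000-dim flattened gradients
emb = rng.standard_normal((4, 768))
grads = rng.standard_normal((4, 10_000))
fused = fuse_features(emb, grads, proj_dim=768)
print(fused.shape)  # → (4, 1536)
```

The fused features can then be used with a simple downstream model (e.g., k-nearest neighbors or a linear probe) while the backbone stays frozen.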

No commits in the last 6 months.

Use this if you are using pre-trained vision transformer models and want to boost their performance on tasks like image classification or semantic segmentation without extensive retraining.

Not ideal if you need a solution for models other than vision transformers or if your primary goal is to train models from scratch, as this focuses on improving frozen representations.

feature-extraction image-classification semantic-segmentation self-supervised-learning machine-learning-research
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 40
Forks:
Language: Python
License: MIT
Last pushed: Oct 31, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/WalterSimoncini/fungivision"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
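The same endpoint can be queried from Python using only the standard library. A minimal sketch (the response schema is not documented here, so the payload is decoded as generic JSON):

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "WalterSimoncini", "fungivision")
print(url)

def fetch_quality(url: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urlopen(url) as resp:
        return json.load(resp)
```

Calling `fetch_quality(url)` returns the decoded JSON for the repository, subject to the rate limits above.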