machengcheng2016/Subspace-Prompt-Learning

Official code for "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models" (TCSVT 2023)

Score: 15 / 100 (Experimental)

This project helps machine learning researchers and practitioners improve the performance of vision-language models like CLIP when adapting them to new image classification tasks with limited data. It takes an existing "prompt-tuned" model and fine-tunes it further to prevent overfitting and enhance generalization. The output is a more robust and accurate image classification model, particularly useful for those working with few-shot learning or novel object categories.
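
Under the hood, this builds on prompt tuning: a handful of continuous "context" vectors are learned in place of a hand-written text prompt while the CLIP backbone stays frozen. The sketch below shows that base setup in a minimal CoOp-style form; it is not this repository's code, and the clip package usage, class names, and hyperparameters are illustrative assumptions.

import torch
import clip  # OpenAI's CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
for p in model.parameters():
    p.requires_grad_(False)  # backbone stays frozen; only the prompt is tuned

classnames = ["cat", "dog"]  # illustrative labels, not the paper's benchmarks
n_ctx = 4                    # number of learnable context tokens
ctx_dim = model.ln_final.weight.shape[0]
ctx = torch.nn.Parameter(0.02 * torch.randn(n_ctx, ctx_dim, device=device))

# Frozen token embeddings, with "X" placeholders where the context will go.
prompts = [" ".join(["X"] * n_ctx) + f" {name}." for name in classnames]
tokens = clip.tokenize(prompts).to(device)
with torch.no_grad():
    embeds = model.token_embedding(tokens).type(model.dtype)

def encode_prompts():
    # Splice the learnable context into each sequence:
    # [SOS] ctx_1 ... ctx_n CLASS . [EOS]
    x = embeds.clone()
    x[:, 1:1 + n_ctx, :] = ctx.type(model.dtype)
    x = x + model.positional_embedding.type(model.dtype)
    x = model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
    x = model.ln_final(x).type(model.dtype)
    eos = tokens.argmax(dim=-1)  # EOS has the largest token id in CLIP's vocab
    return x[torch.arange(x.shape[0]), eos] @ model.text_projection

optimizer = torch.optim.SGD([ctx], lr=2e-3)

def training_step(images, labels):
    with torch.no_grad():
        img = model.encode_image(images)   # image tower is frozen
    txt = encode_prompts()                 # gradients flow only into ctx
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    logits = model.logit_scale.exp() * img @ txt.t()
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because only the few context vectors receive gradients, prompt tuning is sample-efficient but can easily overfit the few-shot training data; that is the failure mode this project's further fine-tuning targets.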

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer working with vision-language models and want to prevent overfitting and improve generalization when fine-tuning them for new image recognition tasks, especially with limited data.

Not ideal if you are looking for a plug-and-play solution for general image classification without prior experience in prompt tuning or vision-language model adaptation.

Tags: vision-language models, few-shot learning, image classification, model fine-tuning, machine learning research
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 0 / 25

Stars: 28
Forks:
Language: Python
License: none
Last pushed: Dec 27, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/machengcheng2016/Subspace-Prompt-Learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
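
If you prefer scripted access, a minimal Python equivalent of the curl call is below; the response schema is not documented here, so the snippet prints the raw JSON payload rather than assuming field names.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/machengcheng2016/Subspace-Prompt-Learning")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surfaces rate-limit or server errors as exceptions
print(resp.json())       # inspect the payload to discover the actual schema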