OSU-MLB/ViT_PEFT_Vision
[CVPR'25 (Highlight)] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition
This project provides a toolkit for systematically evaluating and comparing 16 parameter-efficient fine-tuning (PEFT) methods for large pre-trained vision models across a range of visual recognition tasks, data sizes, and domain shifts. Its primary users are vision researchers who need to assess and reproduce the performance of different PEFT techniques under consistent conditions.
No commits in the last 6 months.
Use this if you are a computer vision researcher needing to rigorously test and compare various parameter-efficient fine-tuning methods for large pre-trained models on different image datasets and scenarios.
Not ideal if you are an end-user simply looking to apply a pre-trained vision model without needing to research or compare different fine-tuning techniques yourself.
Stars: 46
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jun 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OSU-MLB/ViT_PEFT_Vision"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
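As a sketch, the same endpoint can be queried from Python with the standard library. Only the URL itself comes from the listing above; the helper names are hypothetical, and the response schema is not documented here, so the example just prints the parsed JSON:

```python
import json
import urllib.request

# Base URL taken from the curl command above; everything else is an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_endpoint(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repository (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and parse the JSON payload; the field layout is not documented here."""
    with urllib.request.urlopen(quality_endpoint(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires network access; prints whatever fields the API returns.
    print(fetch_quality("ml-frameworks", "OSU-MLB", "ViT_PEFT_Vision"))
```

Within the free tier this counts against the 100-requests/day limit; how a key is supplied for the 1,000/day tier is not described on this page.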
Higher-rated alternatives
Jittor/jittor
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.
zhanghang1989/ResNeSt
ResNeSt: Split-Attention Networks
berniwal/swin-transformer-pytorch
Implementation of the Swin Transformer in PyTorch.
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with...
ViTAE-Transformer/ViTPose
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose...