Lake-Wang/NLP_Adapter_Parameter_Allocation
This project investigates the robustness of parameter allocation strategies in Mix-and-Match (MAM) Adapters for parameter-efficient fine-tuning (PEFT) across different tunable-parameter budgets. Our ablation study shows that the optimal allocation ratio varies by task and model scale, challenging the generalizability of default MAM configurations.
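For context, a MAM Adapter combines prefix tuning with a parallel adapter, so an allocation ratio decides how much of the tunable-parameter budget each component receives. The sketch below illustrates that arithmetic only; the function, its default dimensions, and the per-component cost formulas are simplified assumptions for illustration, not code from this repository.

# Hypothetical sketch (not this repository's code): splitting a fixed
# tunable-parameter budget between MAM's two components, prefix tuning
# and a parallel adapter. Cost formulas are simplified assumptions.
def allocate(budget: int, prefix_ratio: float,
             n_layers: int = 12, d_model: int = 768):
    # Prefix tuning costs 2 * n_layers * d_model parameters per prefix
    # token (a key and a value vector in every layer).
    per_prefix_token = 2 * n_layers * d_model
    # A parallel adapter costs 2 * n_layers * d_model parameters per unit
    # of bottleneck width (down- and up-projections in every layer).
    per_bottleneck_dim = 2 * n_layers * d_model
    prefix_len = int(budget * prefix_ratio / per_prefix_token)
    bottleneck = int(budget * (1 - prefix_ratio) / per_bottleneck_dim)
    return prefix_len, bottleneck

# Example: split a 0.5M-parameter budget 30/70 between prefix and adapter.
print(allocate(500_000, prefix_ratio=0.3))  # -> (8, 18)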
No commits in the last 6 months.
Stars: 1
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jun 10, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Lake-Wang/NLP_Adapter_Parameter_Allocation"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
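The same data can be fetched programmatically; below is a minimal Python sketch using requests. The endpoint URL comes from the curl command above, but the shape of the JSON response is not documented here, so the sketch prints the whole payload rather than assuming field names.

# Minimal sketch: fetch this repository's quality data from the API above.
# The response schema is undocumented here, so no field names are assumed.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "Lake-Wang/NLP_Adapter_Parameter_Allocation")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # inspect the returned stats (stars, language, ...)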
Higher-rated alternatives
uds-lsv/bert-stable-fine-tuning
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
MeryylleA/lunariscodex
A high-performance PyTorch toolkit for pre-training modern, Llama-style language models. Based...
VanekPetr/flan-t5-text-classifier
Fine-tuning of the Flan-T5 LLM for text classification 🤖 focuses on adapting a state-of-the-art...
kingTLE/literary-alpaca2
From vocabulary to fine-tuning, this is all you need
YuweiYin/HLT-MT
[IJCAI-ECAI 2022] HLT-MT: High-resource Language-specific Training for Multilingual Neural...