BaohaoLiao/mefts
[NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
This repository helps machine learning engineers and researchers fine-tune large pre-trained language models such as BERT more efficiently. It takes an existing pre-trained model and fine-tunes it on specific tasks, producing a model that performs well on those tasks while using less memory. This is especially useful when working with limited computational resources or very large models.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to fine-tune large language models such as BERT for natural language processing tasks more efficiently, especially when memory usage is a constraint.
Not ideal if you are not working with pre-trained large language models, or if you need to fine-tune models outside the currently supported set (RoBERTa, BART, and OPT).
Stars
33
Forks
1
Language
Python
License
—
Category
—
Last pushed
Jun 02, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BaohaoLiao/mefts"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
scaleapi/llm-engine
Scale LLM Engine public repository
AGI-Arena/MARS
The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models
modelscope/easydistill
a toolkit on knowledge distillation for large language models
AGI-Edgerunners/LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient...
Wang-ML-Lab/bayesian-peft
Bayesian Low-Rank Adaptation of LLMs: BLoB [NeurIPS 2024] and TFB [NeurIPS 2025]