BaohaoLiao/mefts

[NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning

Score: 19 / 100 (Experimental)

This project helps machine learning engineers and researchers fine-tune large pre-trained language models such as BERT more efficiently. It takes an existing pre-trained model and fine-tunes it on specific tasks, producing a model that performs well on those tasks while using less memory. This is especially useful when working with limited computational resources or very large models.
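
The memory savings come from making the network reversible, as the paper title says: if a block's inputs can be recomputed exactly from its outputs, intermediate activations do not need to be cached for the backward pass. Below is a minimal PyTorch sketch of that reversible-coupling idea only; the functions F and G are hypothetical stand-ins, and the repo's actual technique of wiring adapters into pretrained layers is not shown.

import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1).
    Inputs are exactly recoverable from outputs, so activations
    need not be stored during the forward pass."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU())  # stand-in for F
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.GELU())  # stand-in for G

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Recompute the inputs from the outputs (up to float rounding).
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

block = ReversibleBlock(16)
x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)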

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to fine-tune large language models like BERT for natural language processing tasks more efficiently, especially when memory usage is a constraint.

Not ideal if you are not working with pre-trained large language models, or if you need to fine-tune architectures beyond the currently supported ones (RoBERTa, BART, and OPT).

natural-language-processing large-language-models model-fine-tuning deep-learning-optimization computational-efficiency
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 4 / 25

Stars: 33
Forks: 1
Language: Python
License: None
Last pushed: Jun 02, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BaohaoLiao/mefts"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
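
For Python users, here is a stdlib-only sketch of the same request. The response schema is not documented here, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/BaohaoLiao/mefts"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# No field names assumed; inspect the payload to see the schema.
print(json.dumps(data, indent=2))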