Aradhye2002/selective-peft-toolkit
Official implementation of the paper "Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models"
This toolkit helps machine learning engineers and researchers adapt large language models (LLMs) and other deep learning models to specific tasks without updating all of their millions of parameters. You provide a pre-trained model and a task-specific dataset; it returns a model adapted to your data with far fewer updated parameters, making fine-tuning faster and less resource-intensive. It is well suited to fine-tuning large models on custom datasets under limited computational resources.
No commits in the last 6 months.
Use this if you need to fine-tune large pre-trained models for specific tasks but want to minimize the computational cost and storage requirements by updating only a small, critical subset of parameters.
Not ideal if you have unlimited computational resources and want to perform a full fine-tuning of all model parameters, or if your model is small enough that parameter-efficient techniques offer negligible benefits.
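The core idea of selective fine-tuning described above can be illustrated as follows. This is a hand-rolled sketch, not the toolkit's actual API: it selects a small subset of parameters by magnitude (one possible selection criterion; the paper's actual step-by-step unmasking schedule differs) and applies a gradient update only to that subset, leaving the rest frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "pre-trained" weight matrix and a gradient from some loss
# (stand-ins for a real model's parameters and backprop output).
w = rng.normal(size=(4, 4))
grad = rng.normal(size=(4, 4))

# Select the top 25% of weights by magnitude as the trainable subset.
k = int(0.25 * w.size)
threshold = np.sort(np.abs(w), axis=None)[-k]
mask = np.abs(w) >= threshold

# Apply an SGD step only to the selected parameters; all others
# keep their pre-trained values, so only k entries need storing.
lr = 0.1
w_new = w - lr * grad * mask
```

Because only the masked entries change, checkpointing the fine-tuned model requires storing just those `k` values and their indices rather than a full copy of the weights.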
Stars
9
Forks
1
Language
Python
License
MIT
Category
Last pushed
Jul 02, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Aradhye2002/selective-peft-toolkit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Lets you steer the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.