ryan-air/Alpaca-3B-Fine-Tuned
In this project, I provide code and a Colaboratory notebook that facilitate fine-tuning of a 3B-parameter Alpaca model, based on the instruction-following approach originally developed at Stanford University. The model is adapted with LoRA via Hugging Face's PEFT library, so it can be trained with far fewer computational resources and trainable parameters.
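As a rough sketch of the technique described above (not the repository's exact notebook code; the base checkpoint and hyperparameters below are assumptions), a LoRA setup with PEFT looks like this:

# Minimal LoRA fine-tuning setup with Hugging Face PEFT.
# Illustrative sketch only: the base model name and hyperparameters
# are assumptions, not this repository's exact configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "openlm-research/open_llama_3b"  # assumed 3B base checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)  # used to build the instruction dataset
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of the full weights,
# which is what lets a 3B model fine-tune on modest hardware.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for the adapters
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of trainable weights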
No commits in the last 6 months.
Stars: 6
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Jul 09, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ryan-air/Alpaca-3B-Fine-Tuned"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
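For programmatic access, a minimal Python sketch of the same request follows; it assumes the endpoint returns a JSON body (the response schema is not documented here):

import requests

# Same endpoint as the curl example above; no API key is required
# for up to 100 requests/day. Assumption: the response body is JSON.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/ryan-air/Alpaca-3B-Fine-Tuned"
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors
print(response.json())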
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code.
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.