Lichang-Chen/AlpaGasus
A better Alpaca Model Trained with Less Data (only 9k instructions of the original set)
This project helps machine learning engineers and researchers fine-tune large language models more efficiently. It lets you train a language model on significantly less instruction data while improving performance; the output is a refined model suitable for various natural language processing tasks.
No commits in the last 6 months.
Use this if you want to fine-tune a powerful language model like Alpaca, but need to minimize the amount of labeled instruction data required for training.
Not ideal if you are looking for an out-of-the-box, pre-trained model for direct application without any further training or customization.
Stars: 24
Forks: 3
Language: HTML
License: —
Category: —
Last pushed: Jul 26, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Lichang-Chen/AlpaGasus"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
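For scripted access, the same endpoint can be called from Python with the standard library. This is a minimal sketch based only on the curl example above: the URL path is taken from that example, while the `quality_url` / `fetch_quality` helper names and any JSON field names in the response are assumptions, not part of a documented client.

```python
# Sketch of calling the quality API shown in the curl example above.
# The endpoint path is from that example; helper names are hypothetical.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the API endpoint URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the repo's quality record as a dict (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Usage (makes a live request, subject to the 100 requests/day limit):
# data = fetch_quality("Lichang-Chen", "AlpaGasus")
# print(data)
```

The response shape is not documented on this page, so inspect the returned dict before relying on specific fields.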
Higher-rated alternatives
axolotl-ai-cloud/axolotl
Go ahead and axolotl questions
google/paxml
Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
JosefAlbers/PVM
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
iamarunbrahma/finetuned-qlora-falcon7b-medical
Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
h2oai/h2o-wizardlm
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning