jyjohnchoi/SMoP
The repository contains the code for our EMNLP 2023 paper "SMoP: Towards Efficient and Effective Prompt Tuning with Sparse Mixture-of-Prompts", written by Joon-Young Choi, Junho Kim, Jun-Hyung Park, Wing-Lam Mok, and SangKeun Lee.
This project helps machine learning engineers and researchers adapt large language models (LLMs) to specific tasks. Given an LLM and a dataset, it trains soft prompts with a technique called Sparse Mixture-of-Prompts (SMoP), which routes each input to one of several short soft prompts instead of tuning a single long one. The output is a set of trained prompt parameters that steer the frozen base model, ready for deployment; the LLM's own weights are not updated.
No commits in the last 6 months.
Use this if you need to improve the performance of a large language model on a particular natural language understanding task without the computational cost of full model fine-tuning.
Not ideal if you are a practitioner looking for a ready-to-use LLM for a common task, as this tool requires familiarity with machine learning training pipelines and custom dataset preparation.
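The routing idea behind SMoP can be sketched in a few lines. The dimensions, the linear router, and the use of NumPy here are illustrative assumptions for a minimal sketch, not the repository's actual implementation (which trains these components end-to-end in PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prompts, prompt_len = 16, 4, 5

# Hypothetical pool of short soft prompts (stand-ins for learned embeddings).
prompts = rng.normal(size=(n_prompts, prompt_len, d_model))
# Linear router that scores each prompt from the mean input embedding.
W_router = rng.normal(size=(d_model, n_prompts))

def smop_prepend(x_embed):
    """Route an input to a single soft prompt (top-1 gating) and prepend it."""
    pooled = x_embed.mean(axis=0)      # (d_model,)
    logits = pooled @ W_router         # (n_prompts,)
    k = int(np.argmax(logits))         # sparse: only one prompt is activated
    return np.concatenate([prompts[k], x_embed], axis=0), k

x = rng.normal(size=(10, d_model))     # 10 token embeddings
out, chosen = smop_prepend(x)
print(out.shape)                       # (15, 16): prompt tokens + input tokens
```

Because only one short prompt is active per input, the per-step cost stays close to that of a single short prompt while the model retains the capacity of several.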
Stars
9
Forks
5
Language
Python
License
—
Category
Last pushed
Nov 06, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jyjohnchoi/SMoP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
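The same endpoint can be called from code. This is a minimal sketch: the `owner/repo` path pattern is inferred from the sample curl command above, and the response format is not documented here, so only the URL construction is shown:

```python
# Base endpoint taken from the sample curl command on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (hypothetical helper)."""
    return f"{BASE}/{owner}/{repo}"

print(quality_url("jyjohnchoi", "SMoP"))
# https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jyjohnchoi/SMoP
```

Fetching the URL (e.g. with `urllib.request.urlopen`) is subject to the rate limits above: 100 requests/day without a key, 1,000/day with a free key.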
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240