arazd/ResidualPrompts
Residual Prompt Tuning: a method for faster and better prompt tuning.
This project offers a refined approach to prompt tuning, a technique for adapting large language models (LLMs) to specific tasks without retraining the entire model. Given an existing LLM and task-specific training data, it produces a more accurate and stable 'soft prompt' that improves the LLM's performance on that task. It is aimed at machine learning engineers and researchers who fine-tune LLMs for natural language processing applications.
No commits in the last 6 months.
Use this if you are a machine learning practitioner looking to improve the efficiency and performance of adapting large language models for specific downstream tasks.
Not ideal if you are not working directly with prompt tuning methods for large language models, or if you prefer full model fine-tuning over prompt-based approaches.
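The core idea behind Residual Prompt Tuning is that, instead of optimizing the soft prompt embeddings directly, each prompt embedding is passed through a shallow MLP with a skip connection during training. The following is a minimal sketch of that residual reparameterization in NumPy; all sizes are hypothetical, and the paper's full version also includes details (such as layer normalization) that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 10 prompt tokens, model dim 64, bottleneck dim 16.
n_tokens, d_model, d_bottleneck = 10, 64, 16

# Trainable prompt embeddings (the only parameters in vanilla prompt tuning).
prompt = rng.normal(scale=0.02, size=(n_tokens, d_model))

# Shallow MLP used only during training to reparameterize the prompt.
w_down = rng.normal(scale=0.02, size=(d_model, d_bottleneck))
w_up = rng.normal(scale=0.02, size=(d_bottleneck, d_model))

def reparameterize(p):
    # Residual reparameterization: p + MLP(p). The skip connection is what
    # stabilizes optimization compared with passing p through the MLP alone.
    hidden = np.maximum(p @ w_down, 0.0)  # down-project + ReLU
    return p + hidden @ w_up              # up-project + residual

soft_prompt = reparameterize(prompt)
# soft_prompt (n_tokens x d_model) is prepended to the input embeddings.
# After training, the reparameterized prompt can be frozen and the MLP discarded.
```

Gradients flow through both the MLP and the skip connection, so the prompt can still be updated directly even when the MLP's contribution is small.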
Stars: 57
Forks: 8
Language: Python
License: Apache-2.0
Category:
Last pushed: May 10, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/arazd/ResidualPrompts"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
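The same endpoint can be called from Python with the standard library. This is a sketch assuming the URL pattern from the curl example above; the response schema and the `X-API-Key` header name are assumptions, so check the service's documentation before relying on them.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category, owner, repo):
    # Path mirrors the curl example: /quality/<category>/<owner>/<repo>
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    # An API key is optional (100 requests/day without one, 1,000/day with one).
    # The "X-API-Key" header name is a guess -- verify against the API docs.
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("prompt-engineering", "arazd", "ResidualPrompts")` would fetch the record for this repository.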
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240