salesforce/Overture
Library for soft prompt tuning
This library helps natural language processing researchers explore and experiment with 'prompt tuning' for large language models. Given a pre-trained language model and text data for a specific task (such as text classification or question answering), it learns a small set of continuous 'soft prompts' that steer the frozen model toward that task. It is designed for NLP researchers developing new methods for efficient model adaptation.
No commits in the last 6 months.
Use this if you are an NLP researcher interested in developing or experimenting with prompt tuning methods for tasks like masked language modeling, text classification, or question answering.
Not ideal if you are looking for a highly abstracted, plug-and-play solution for applying existing prompt tuning techniques without deep customization.
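To make the idea concrete, here is a toy sketch of soft prompt tuning, not Overture's actual API: a frozen linear scorer stands in for the pretrained model, the token embeddings stay fixed, and the only trainable parameters are the prepended prompt vector. All names and the model itself are illustrative assumptions.

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding dimension

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Frozen "model": a fixed scoring vector (stands in for pretrained weights).
w = [random.gauss(0, 1) for _ in range(DIM)]

# Frozen token embeddings for two toy inputs with opposite binary labels.
data = [
    ([[random.gauss(0, 1) for _ in range(DIM)] for _ in range(3)], 1.0),
    ([[random.gauss(0, 1) for _ in range(DIM)] for _ in range(3)], 0.0),
]

# The only trainable parameters: one soft prompt vector prepended to every input.
prompt = [0.0] * DIM

def forward(tokens, p):
    """Mean-pool the prompt plus the token embeddings, then score with frozen w."""
    n = len(tokens) + 1
    pooled = [(p[d] + sum(t[d] for t in tokens)) / n for d in range(DIM)]
    return sigmoid(sum(w[d] * pooled[d] for d in range(DIM)))

def mean_loss():
    """Binary cross-entropy averaged over the toy dataset."""
    total = 0.0
    for tokens, y in data:
        pred = forward(tokens, prompt)
        total += -(y * math.log(pred) + (1 - y) * math.log(1 - pred))
    return total / len(data)

loss_before = mean_loss()
lr = 0.5
for _ in range(200):
    for tokens, y in data:
        pred = forward(tokens, prompt)
        n = len(tokens) + 1
        # Gradient of the loss with respect to the prompt only;
        # w and the token embeddings never change.
        for d in range(DIM):
            prompt[d] -= lr * (pred - y) * w[d] / n
loss_after = mean_loss()
```

The point of the sketch is the parameter split: the model and inputs are frozen, so adapting to the task means updating only a handful of prompt parameters, which is what makes prompt tuning cheap relative to full fine-tuning.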
Stars
22
Forks
1
Language
Python
License
BSD-3-Clause
Category
Prompt engineering
Last pushed
Jun 12, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/salesforce/Overture"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
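The same endpoint can be called from Python with only the standard library. The URL path below is taken from the curl command above; the `X-API-Key` header name is an assumption, so check the service's docs before relying on it.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch quality data as a dict. Passing an API key (assumed header
    name 'X-API-Key') raises the limit from 100 to 1,000 requests/day."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("prompt-engineering", "salesforce", "Overture")` requests the same resource as the curl command above.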
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240