THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
P-tuning v2 helps machine learning engineers and researchers adapt existing large language models (LLMs) to specific natural language tasks without the extensive computational cost of traditional fine-tuning. It takes a pre-trained transformer model and task-specific text data, trains only a small set of continuous prompts inserted at every layer (hence "deep" prompt tuning), and produces a specialized model ready for tasks like question answering or named entity recognition. It is aimed at professionals building customized NLP applications.
2,077 stars. No commits in the last 6 months.
Use this if you need to optimize the performance of smaller or medium-sized pre-trained language models on specific NLP tasks, like sequence tagging or text classification, with limited computational resources.
Not ideal if you require full model fine-tuning for maximum performance on very large models or if you are not comfortable with machine learning development workflows.
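The efficiency claim above comes down to how few parameters deep prompt tuning trains. A back-of-the-envelope sketch, using illustrative numbers (a BERT-large-like model with 24 layers, hidden size 1024, roughly 340M parameters, and a prefix length of 20 — all assumptions, not values from this listing):

```python
# Compare trainable parameters: full fine-tuning vs. deep prompt tuning.
# Model dimensions and prefix length below are illustrative assumptions.

def deep_prompt_params(num_layers: int, hidden_size: int, prefix_len: int) -> int:
    """Trainable parameters when a prefix of continuous prompts is
    prepended at every transformer layer (one key and one value
    vector per prompt position per layer)."""
    return num_layers * 2 * prefix_len * hidden_size

FULL_FINETUNE = 340_000_000  # assumed total weights of a BERT-large-scale model
deep_prompts = deep_prompt_params(num_layers=24, hidden_size=1024, prefix_len=20)

print(deep_prompts)                              # 983040
print(f"{deep_prompts / FULL_FINETUNE:.2%}")     # 0.29%
```

Under these assumptions, the tuned prompts amount to well under 1% of the weights a full fine-tune would update, which is why the approach suits settings with limited compute.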
Stars
2,077
Forks
207
Language
Python
License
Apache-2.0
Category
Last pushed
Nov 16, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/THUDM/P-tuning-v2"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
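The same endpoint can be called from Python. A minimal sketch with the standard library, assuming only what the curl example shows (the URL pattern; the shape of the JSON response is not documented here):

```python
# Minimal client for the quality API shown above.
# Only the URL pattern is taken from the curl example; the response
# fields are not specified on this page, so the payload is returned
# as a plain decoded dict.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Reproduces the curl example's URL:
print(quality_url("prompt-engineering", "THUDM", "P-tuning-v2"))
```

Without a key this is limited to 100 requests/day, so cache responses rather than refetching per page view.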
Related tools
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240
VE-FORBRYDERNE/mtj-softtuner
Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab...