thunlp/PromptPapers
Must-read papers on prompt-based tuning for pre-trained language models.
This is a curated list of research papers on prompt-based tuning, a technique for adapting large pre-trained language models to downstream tasks such as classification. It helps NLP researchers and practitioners quickly find relevant literature on reframing tasks as prompts handled by the pre-trained model itself, rather than training a traditional task-specific classifier on top. The resource provides paper titles, authors, and links to PDFs, serving anyone working with or studying prompt-based methods for language models.
4,296 stars. No commits in the last 6 months.
Use this if you are an NLP researcher or practitioner looking for comprehensive academic resources and key papers on prompt-based tuning for pre-trained language models.
Not ideal if you are looking for an implementation guide, a software library, or practical tutorials on how to apply prompt-based tuning to your own projects.
Stars
4,296
Forks
388
Language
—
License
—
Category
Last pushed
Jul 17, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/thunlp/PromptPapers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
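The curl call above can also be issued programmatically. A minimal Python sketch using only the standard library; the Bearer-token header for the keyed tier is an assumption (the header name is not documented here), and the free tier shown above needs no key at all:

```python
import urllib.request

# Endpoint from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"
REPO_PATH = "prompt-engineering/thunlp/PromptPapers"

def build_request(api_key=None):
    """Build the GET request; attach a key only if one is provided."""
    req = urllib.request.Request(f"{BASE_URL}/{REPO_PATH}")
    if api_key:
        # Hypothetical auth scheme -- consult the API docs for the real one.
        req.add_header("Authorization", f"Bearer {api_key}")
    return req

# Keyless request (100 requests/day tier):
req = build_request()
print(req.full_url)
```

Sending the request is then a matter of `urllib.request.urlopen(req)` and decoding the response body.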
Higher-rated alternatives
google-research/prompt-tuning
Original implementation of prompt tuning from Lester et al., 2021
ZhangYuanhan-AI/NOAH
[TPAMI] Searching prompt modules for parameter-efficient transfer learning.
gmkim-ai/PromptKD
An official implementation of "PromptKD: Distilling Student-Friendly Knowledge for Generative...
zhengzangw/DoPrompt
Official implementation of PCS in the paper "Prompt Vision Transformer for Domain Generalization"
Hzfinfdu/MPMP
ACL'2023: Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning