zwcolin/Domain-Robustness-Prompt-Tuning
Implementation of the report "On the Domain Robustness of Prefix and Prompt Tuning".
This project helps machine learning engineers and researchers evaluate how well language models perform when they are fine-tuned for specific tasks but then applied to data from new, slightly different domains. It takes configurations for language model training and evaluation as input, and outputs metrics that quantify the model's robustness across data domains.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who needs to assess the domain robustness of language models after prompt or prefix tuning.
Not ideal if you are looking for a pre-trained, ready-to-use language model for direct application without needing to evaluate its tuning robustness.
Stars
20
Forks
3
Language
Python
License
—
Category
—
Last pushed
Mar 10, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/zwcolin/Domain-Robustness-Prompt-Tuning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
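The curl command above can also be issued from Python. A minimal sketch using only the standard library is shown below; the endpoint path comes from the curl example, but the exact JSON fields in the response are an assumption and may differ.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"


def api_url(repo_slug: str) -> str:
    """Build the API URL for an owner/name repository slug."""
    return f"{BASE}/{repo_slug}"


def fetch_repo_data(repo_slug: str) -> dict:
    """Fetch and decode the JSON payload for one repository.

    Note: the response schema (field names, types) is not documented
    here, so callers should inspect the returned dict before relying
    on specific keys.
    """
    with urlopen(api_url(repo_slug)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_repo_data("zwcolin/Domain-Robustness-Prompt-Tuning")
#   print(json.dumps(data, indent=2))
```

Unauthenticated calls share the 100 requests/day limit, so cache responses locally if you query many repositories.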
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240