zhenwang9102/coherence-boosting
Coherence boosting: When your pretrained language model is not paying enough attention (ACL 2022) https://arxiv.org/abs/2110.08294
This project improves the predictions of large language models on tasks that require long-range context — that is, understanding or generating text whose meaning depends on words far apart in a sentence or passage. Given a pre-trained language model and an input text, it applies 'Coherence Boosting', an inference-time technique, to produce more accurate next-word predictions and better overall comprehension. It is aimed at researchers and practitioners using existing language models for complex natural language processing tasks.
No commits in the last 6 months.
Use this if your pre-trained language model struggles with tasks requiring it to remember or connect information from a long context, such as predicting a story's ending or answering questions that require deep comprehension.
Not ideal if you are looking for a new language model architecture, as this is an inference procedure designed to enhance existing models.
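The core idea, per the paper, is to contrast the model's full-context predictions against its predictions from a shortened (premature) context, boosting tokens that genuinely depend on the long-range context. Below is a minimal NumPy sketch of such a log-linear contrast; the function name, the exact weighting `(1 + alpha, -alpha)`, and the `alpha` value are illustrative assumptions, not the repository's API.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over a 1-D vocabulary axis.
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def coherence_boost(full_logits, short_logits, alpha=1.0):
    """Contrast full-context against short-context next-token scores.

    full_logits:  logits conditioned on the entire context.
    short_logits: logits conditioned on a truncated (recent-only) context.
    alpha:        boosting strength (hypothetical default; would be tuned).

    Returns log-linear scores (1 + alpha) * logp_full - alpha * logp_short,
    which demote tokens that are already likely without the long context.
    """
    logp_full = log_softmax(np.asarray(full_logits, dtype=float))
    logp_short = log_softmax(np.asarray(short_logits, dtype=float))
    return (1 + alpha) * logp_full - alpha * logp_short
```

For example, a token that is probable under the full context but equally probable from the last few words alone gets demoted relative to a token whose probability rises only when the distant context is visible.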
Stars
15
Forks
1
Language
Python
License
—
Category
prompt-engineering
Last pushed
Apr 23, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/zhenwang9102/coherence-boosting"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240