zhenwang9102/coherence-boosting

Coherence boosting: When your pretrained language model is not paying enough attention (ACL 2022) https://arxiv.org/abs/2110.08294

Score: 19 / 100 (Experimental)

This project improves the predictions of large language models on tasks that require attending to words far apart in a sentence or passage. Given an input text and a pre-trained language model, it applies 'Coherence Boosting': the model's predictions conditioned on the full context are contrasted with its predictions conditioned on a truncated (recent-only) context, amplifying the contribution of long-range information to the next-word distribution. This is useful for researchers and practitioners who want better long-context behavior from existing models without retraining.
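The contrastive step can be sketched as a log-linear combination of the two next-token distributions. This is a minimal stdlib-only illustration, not the repo's actual API: the function name `coherence_boost` and the parameter `alpha` (the boosting strength) are hypothetical names chosen here for clarity.

```python
import math

def coherence_boost(full_logits, short_logits, alpha=0.5):
    """Combine next-token logits from the full context and a truncated
    context into boosted log-probabilities.

    Boosted score per token: (1 + alpha) * full - alpha * short.
    Tokens that the full context favors over the short context get
    up-weighted; alpha > 0 controls how strongly.
    (Hypothetical helper sketching the log-linear combination; the
    repository's real interface may differ.)
    """
    boosted = [(1.0 + alpha) * f - alpha * s
               for f, s in zip(full_logits, short_logits)]
    # log-softmax with a max-shift for numerical stability
    m = max(boosted)
    log_z = m + math.log(sum(math.exp(b - m) for b in boosted))
    return [b - log_z for b in boosted]

# Example: the short context alone prefers token 0, but contrasting it
# against the full context shifts the prediction to token 1.
log_probs = coherence_boost([2.0, 1.0, 0.0], [3.0, 0.0, 0.0], alpha=1.0)
```

In practice the two logit vectors would come from running the same pre-trained model twice, once on the whole passage and once on only its tail.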

No commits in the last 6 months.

Use this if your pre-trained language model struggles with tasks requiring it to remember or connect information from a long context, such as predicting a story's ending or answering questions that require deep comprehension.

Not ideal if you are looking for a new language model architecture, as this is an inference procedure designed to enhance existing models.

natural-language-understanding natural-language-generation long-context-reasoning question-answering text-coherence
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 5 / 25

How are scores calculated?

Stars: 15
Forks: 1
Language: Python
License: None
Last pushed: Apr 23, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/zhenwang9102/coherence-boosting"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.