Tebmer/Rereading-LLM-Reasoning

EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for improving reasoning.

Score: 32 / 100 (Emerging)

This project helps anyone using Large Language Models (LLMs) improve accuracy on reasoning tasks, especially complex questions. By re-presenting the question to the LLM before it answers, the method produces more accurate and consistent answers. The target users are researchers, data scientists, and practitioners who build or evaluate LLM-powered applications and need to boost their models' problem-solving capabilities without any retraining.
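For reference, the core idea reduces to a one-line prompt change. Below is a minimal Python sketch of a re-reading prompt combined with chain-of-thought, assuming an OpenAI-style chat API; the function name, model choice, and sample question are illustrative, not the repo's exact template.

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def re_reading_prompt(question: str) -> str:
    # Re-reading: state the question, then repeat it before eliciting reasoning.
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model; illustrative choice
    messages=[{
        "role": "user",
        "content": re_reading_prompt(
            "Roger has 5 tennis balls. He buys 2 more cans with 3 balls each. "
            "How many tennis balls does he have now?"
        ),
    }],
)
print(response.choices[0].message.content)

The only change versus a standard chain-of-thought prompt is the repeated question, which is what lets later tokens of the question attend to earlier ones on the second pass.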

No commits in the last 6 months.

Use this if your Large Language Models struggle with complex reasoning problems or give inconsistent answers, and you want a straightforward, training-free method to improve their accuracy.

Not ideal if your primary goal is inference speed or token cost on simple tasks: the "re-reading" step includes the question twice, increasing prompt length and input-token usage.

LLM-reasoning natural-language-processing AI-model-evaluation problem-solving-AI cognitive-enhancement
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 29
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Dec 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Tebmer/Rereading-LLM-Reasoning"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
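A Python equivalent of the curl call above, using the requests library; the response is assumed to be a JSON payload containing the scores shown on this page.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Tebmer/Rereading-LLM-Reasoning")

resp = requests.get(url, timeout=30)
resp.raise_for_status()          # fail loudly on HTTP errors
data = resp.json()               # assumed JSON body with the quality scores
print(data)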