UKPLab/emnlp2024-code-prompting

Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024

Score: 35 / 100 (Emerging)

This project helps AI/ML researchers and practitioners explore how code-based prompts can improve the conditional reasoning abilities of Large Language Models (LLMs). It transforms natural language problems into code, which is then fed to a text+code LLM, and the resulting outputs are compared against traditional text-based prompts to assess whether code prompts yield more accurate answers and better tracking of key entities.
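For illustration, the sketch below shows the kind of code prompt this approach produces, assuming the general pattern of keeping the natural language as comments and expressing its conditions as Python control flow; the document, question, and variable names here are hypothetical and not taken from the repository.

# Document: "You can claim the benefit if you are over 65 or you care for a relative."
# Question: "Alex is 58 and cares for a relative. Can Alex claim the benefit?"
over_65 = False   # Alex is 58
is_carer = True   # Alex cares for a relative
if over_65 or is_carer:
    answer = "yes"
else:
    answer = "no"
print(answer)     # prints "yes"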

No commits in the last 6 months.

Use this if you are a researcher or developer working with LLMs and want to evaluate or improve their ability to handle complex conditional logic and scenario-based question answering.

Not ideal if you are looking for a ready-to-use application or a production-grade solution for end-user natural language processing tasks.

LLM-evaluation prompt-engineering natural-language-understanding AI-research conditional-reasoning
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 27
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Nov 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/UKPLab/emnlp2024-code-prompting"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
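If you would rather call the endpoint from Python than curl, a minimal sketch using only the standard library is shown below; the JSON response schema is not documented on this page, so the script simply prints whatever payload comes back.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/UKPLab/emnlp2024-code-prompting")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # parse the JSON body

# The exact fields (overall score, per-dimension scores, repo stats) are
# assumptions based on the values shown above, so just pretty-print everything.
print(json.dumps(data, indent=2))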