seonghyeonye/TAPP

[AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following

Quality score: 29 / 100 (Experimental)

This project helps researchers in natural language processing evaluate how well large language models (LLMs) can follow instructions. It takes in datasets of instructions and desired outputs, then applies different prompting strategies to LLMs to see how accurately they generate the correct responses. Anyone working on improving or analyzing LLMs for instruction-following tasks would find this valuable.
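
To make the idea concrete, the approach named in the paper title (a task-agnostic prefix prompt, TAPP) amounts to prepending the same fixed prefix of demonstrations to every instruction before querying the model. The sketch below is illustrative only and is not code from this repository; the prefix text and function name are hypothetical.

# Illustrative sketch of a task-agnostic prefix prompt (TAPP).
# The demonstration text is a hypothetical placeholder, not taken from the repository.
TASK_AGNOSTIC_PREFIX = (
    "Definition: Answer the following question.\n"
    "Input: What is the capital of France?\n"
    "Output: Paris\n\n"
)

def build_prompt(instruction: str, input_text: str) -> str:
    """Prepend the same fixed prefix to every task, regardless of task type."""
    return (
        f"{TASK_AGNOSTIC_PREFIX}"
        f"Definition: {instruction}\n"
        f"Input: {input_text}\n"
        "Output:"
    )

print(build_prompt("Classify the sentiment of the review.", "The movie was great!"))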

No commits in the last 6 months.

Use this if you are an NLP researcher or practitioner studying the effectiveness of various prompting techniques for improving LLMs' ability to follow complex instructions.

Not ideal if you are looking for a ready-to-use application for a specific real-world problem, as this is a research framework for evaluation.

natural-language-processing large-language-models prompt-engineering model-evaluation ai-research
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 4 / 25


Stars: 79
Forks: 2
Language: Python
License: MIT
Last pushed: Sep 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/seonghyeonye/TAPP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
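
The same endpoint can also be queried programmatically. A minimal Python sketch, assuming the endpoint returns JSON and that the requests library is installed; the response fields are not documented here, so the example simply prints the raw payload.

import requests

# Endpoint shown in the curl example above; the response is assumed to be JSON.
url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/seonghyeonye/TAPP"
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.json())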