thunderous77/GLaPE
Official implementation for "GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Models" (stay tuned & more will be updated)
This project helps AI developers and researchers optimize the prompts they use with large language models without requiring human-labeled "gold standard" answers. You supply a dataset and an initial prompt, and it outputs an improved prompt that performs better on that dataset.
No commits in the last 6 months.
Use this if you are developing LLM applications and want to iteratively improve your prompts without the time and cost of creating extensive human-labeled evaluation datasets.
Not ideal if you already have a perfectly curated, gold-standard labeled dataset for prompt evaluation, or if your prompt optimization needs are very simple.
Stars: 8
Forks: 1
Language: Python
License: —
Category:
Last pushed: Feb 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/thunderous77/GLaPE"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
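The same endpoint can be called programmatically. Below is a minimal Python sketch using only the standard library; the URL pattern comes from the curl example above, but the shape of the JSON response is an assumption, so inspect the actual payload before relying on specific field names.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the repo-quality endpoint URL (pattern taken from the curl example)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_repo_quality(owner: str, repo: str) -> dict:
    """Fetch the repo-quality record as a dict (free tier, no API key).

    NOTE: the parsed JSON structure is not documented here; treat the
    returned dict's keys as unknown until you inspect a real response.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_repo_quality("thunderous77", "GLaPE")
    print(json.dumps(data, indent=2))
```

For higher request volumes, the free key mentioned above would presumably be passed as a header or query parameter; the exact mechanism is not documented here.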
Higher-rated alternatives
meta-prompting/meta-prompting
Official implementation of Meta Prompting for AI Systems (https://arxiv.org/abs/2311.11482)
auniquesun/Point-PRC
[NeurIPS 2024] Official implementation of the paper "Point-PRC: A Prompt Learning Based...
slashrebootofficial/simulated-metacognition-in-open-source-llms
This repository archives artifacts (prompts, configs, logs, and scripts) from a series of...
UKPLab/emnlp2024-code-prompting
Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024
egmaminta/GEPA-Lite
A lightweight implementation of the GEPA (Genetic-Pareto) prompt optimization method for large...