egmaminta/GEPA-Lite
A lightweight implementation of the GEPA (Genetic-Pareto) prompt optimization method for large language models.
GEPA-Lite automatically refines prompts for a specific task: you supply an initial prompt plus a set of example questions and answers, and the system iteratively tests, evaluates, and rewrites the prompt to improve its performance, returning an optimized prompt tailored to your target task.
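The loop described above (test, evaluate, modify, keep the best) can be sketched as a minimal hill-climbing variant. This is an illustrative simplification, not the repository's actual implementation: the real GEPA method uses LLM-driven prompt rewrites and Pareto-front candidate selection, while the `mutate` and `score` functions here are hypothetical stand-ins.

```python
import random

def mutate(prompt, rng):
    # Toy mutation: append an instruction fragment. In GEPA proper, an LLM
    # proposes the rewrite based on feedback from failed examples.
    fragments = ["Be concise.", "Think step by step.", "Answer directly."]
    return prompt + " " + rng.choice(fragments)

def score(prompt, examples):
    # Stand-in evaluator: in GEPA this would run the LLM on each example
    # question and compare its answer to the reference answer. Here we
    # reward keyword presence, purely for illustration.
    return sum(kw in prompt for kw in ("concise", "step")) / 2

def optimize(seed_prompt, examples, iterations=20, seed=0):
    # Greedy single-candidate loop: keep a mutation only if it scores higher.
    rng = random.Random(seed)
    best, best_score = seed_prompt, score(seed_prompt, examples)
    for _ in range(iterations):
        candidate = mutate(best, rng)
        s = score(candidate, examples)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

Swapping the stub `score` for a real evaluation harness (run the LLM on each example, grade the output) turns this skeleton into a usable, if simplistic, prompt optimizer.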
No commits in the last 6 months.
Use this if you need an efficient way to automatically improve the accuracy and quality of your LLM's responses for a single, well-defined task.
Not ideal if you're looking for a tool to optimize prompts for very broad, multi-purpose LLM applications without specific task evaluations.
Stars: 55
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Aug 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/egmaminta/GEPA-Lite"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
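The curl command above can also be issued from Python with the standard library. A small sketch, assuming only the endpoint URL shown above; the structure of the JSON response is not documented here, so the example fetches and returns it as-is rather than assuming field names.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL in the same shape as the curl example.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Perform the GET request and decode the JSON body.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality("prompt-engineering", "egmaminta", "GEPA-Lite"))
```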
Higher-rated alternatives
meta-prompting/meta-prompting
Official implementation of Meta Prompting for AI Systems (https://arxiv.org/abs/2311.11482)
auniquesun/Point-PRC
[NeurIPS 2024] Official implementation of the paper "Point-PRC: A Prompt Learning Based...
slashrebootofficial/simulated-metacognition-in-open-source-llms
This repository archives artifacts (prompts, configs, logs, and scripts) from a series of...
UKPLab/emnlp2024-code-prompting
Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024
Techolution/LazyLM
A prompting framework for getting foundational models to “lazily” evaluate their reasoning trace