av/klmbr
klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs
This technique helps anyone working with Large Language Models (LLMs) get more varied, less predictable responses. By slightly altering the input text before it reaches the model, you can break out of cases where the model returns the same 'overfit' answer every time. It is aimed at content creators, marketers, and researchers who need fresh perspectives or creative output from LLMs.
No commits in the last 6 months.
Use this if your LLM is giving repetitive or overly predictable answers, and you need to encourage more creative or less 'overfit' outputs.
Not ideal if you require strictly consistent, factually accurate, or grammatically perfect outputs where any alteration to the input or output would be detrimental.
Stars
86
Forks
2
Language
TeX
License
AGPL-3.0
Category
prompt-engineering
Last pushed
Sep 22, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/av/klmbr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
meta-prompting/meta-prompting
Official implementation of Meta Prompting for AI Systems (https://arxiv.org/abs/2311.11482)
auniquesun/Point-PRC
[NeurIPS 2024] Official implementation of the paper "Point-PRC: A Prompt Learning Based...
slashrebootofficial/simulated-metacognition-in-open-source-llms
This repository archives artifacts (prompts, configs, logs, and scripts) from a series of...
UKPLab/emnlp2024-code-prompting
Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024
egmaminta/GEPA-Lite
A lightweight implementation of the GEPA (Genetic-Pareto) prompt optimization method for large...