intellectronica/generative-learning
Using a reasoning LLM to learn a prompt from data
This project helps data professionals and content managers automatically generate precise instructions for Large Language Models (LLMs). Given examples of input text paired with the desired structured output (such as YAML), the system learns how to transform new, similar text into the same format. It is ideal for anyone who needs to consistently convert unstructured content into a structured representation without manually crafting complex prompts.
No commits in the last 6 months.
Use this if you have a dataset of text and its corresponding structured output, and you want an LLM to automatically learn the best way to perform that transformation.
Not ideal if you don't have example data or if your transformation task is highly creative and doesn't follow a clear input-output pattern.
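The core idea can be sketched in a few lines: show a reasoning model the input/output pairs and ask it to write the instruction prompt itself. This is a minimal sketch assuming an OpenAI-style chat client; the example data, function names, and model name are illustrative, not the project's actual code.

```python
# Illustrative sketch of generative prompt-learning: infer a reusable
# instruction prompt from input -> YAML example pairs.
# The example records and the "o3-mini" model name are assumptions.

EXAMPLES = [
    {
        "input": "Alice Smith joined Acme Corp as CTO in 2021.",
        "output": "name: Alice Smith\ncompany: Acme Corp\nrole: CTO\nyear: 2021",
    },
    {
        "input": "Bob Lee has led marketing at Globex since 2019.",
        "output": "name: Bob Lee\ncompany: Globex\nrole: Head of Marketing\nyear: 2019",
    },
]


def build_meta_prompt(examples):
    """Format the pairs into a meta-prompt asking the model to
    write an instruction prompt that explains the transformation."""
    shots = "\n\n".join(
        f"INPUT:\n{ex['input']}\nOUTPUT:\n{ex['output']}" for ex in examples
    )
    return (
        "Study the following input/output pairs and write a single, precise "
        "instruction prompt that would make an LLM produce the OUTPUT (YAML) "
        "from the INPUT. Reply with the instruction prompt only.\n\n" + shots
    )


def learn_prompt(client, examples, model="o3-mini"):
    """One call to a reasoning model returns the learned prompt,
    which can then be reused verbatim on new, unseen text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_meta_prompt(examples)}],
    )
    return response.choices[0].message.content
```

The learned prompt is an ordinary string, so it can be saved and applied to new documents without repeating the examples on every call.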
Stars: 25
Forks: 1
Language: Jupyter Notebook
License: —
Category: —
Last pushed: May 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/intellectronica/generative-learning"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
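The same endpoint can be queried from Python with only the standard library. A minimal sketch; the URL structure is taken from the curl example above, while the function names and JSON response shape are assumptions.

```python
# Fetch a repository's quality data from the API shown above.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def repo_url(category, owner, repo):
    """Build the API URL for one repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_repo_data(category, owner, repo, timeout=10):
    """Fetch and decode the JSON payload for a repository
    (assumes the endpoint returns JSON)."""
    url = repo_url(category, owner, repo)
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

With an API key, you would presumably pass it as a header or query parameter; check the service's documentation for the exact scheme.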
Higher-rated alternatives
meta-prompting/meta-prompting
Official implementation of Meta Prompting for AI Systems (https://arxiv.org/abs/2311.11482)
auniquesun/Point-PRC
[NeurIPS 2024] Official implementation of the paper "Point-PRC: A Prompt Learning Based...
slashrebootofficial/simulated-metacognition-in-open-source-llms
This repository archives artifacts (prompts, configs, logs, and scripts) from a series of...
UKPLab/emnlp2024-code-prompting
Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024
egmaminta/GEPA-Lite
A lightweight implementation of the GEPA (Genetic-Pareto) prompt optimization method for large...