madaan/self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
This project helps developers improve the quality of text and code generated by large language models (LLMs). An LLM first generates an initial output, then critiques its own work, and finally uses that feedback to refine the output. This iterative self-correction loop is useful for anyone working with LLMs who wants better, more accurate results on tasks such as generating acronyms, improving code readability, or creating dialogue responses.
785 stars. No commits in the last 6 months.
Use this if you are a developer aiming to enhance the accuracy and quality of outputs from your LLM applications through an automated self-correction loop.
Not ideal if you are looking for a plug-and-play solution without needing to integrate and manage LLM prompts and iterations.
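The generate → critique → refine loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual API: `llm` stands in for any callable that maps a prompt to text, and the prompt templates and stop phrase are assumptions for the sake of the example.

```python
def self_refine(task, llm, max_iters=3, stop_phrase="LOOKS GOOD"):
    """Iteratively generate, critique, and refine an answer.

    `llm` is any callable prompt -> text (a hypothetical interface,
    not the repo's own API). The loop stops early when the model's
    feedback contains `stop_phrase`, or after `max_iters` rounds.
    """
    # Step 1: initial generation
    output = llm(f"Task: {task}\nProduce an initial answer.")
    for _ in range(max_iters):
        # Step 2: the model critiques its own output
        feedback = llm(f"Task: {task}\nAnswer: {output}\nCritique this answer.")
        if stop_phrase in feedback:
            break
        # Step 3: the model rewrites the output using its feedback
        output = llm(
            f"Task: {task}\nAnswer: {output}\nFeedback: {feedback}\n"
            f"Rewrite the answer using the feedback."
        )
    return output
```

With a real model, `llm` would wrap an API call; here a canned-response stub is enough to exercise the loop end to end.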
Stars
785
Forks
68
Language
Python
License
Apache-2.0
Category
Last pushed
Oct 04, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/madaan/self-refine"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
genlm/genlm-control
Controlled text generation with programmable constraints
Intelligent-CAT-Lab/AlphaTrans
Artifact repository for the paper "AlphaTrans: A Neuro-Symbolic Compositional Approach for...
PCI-ORG/PCI-Personnel
Policy Change Index for Personnel (PCI-Personnel)
gokmengokhan/deo-llm-reframing
Replication materials: Testing Distance-Engagement Oscillation as a prompting framework for...
hemangjoshi37a/o1-meta-prompt
This project aims to emulate some of the advanced reasoning capabilities seen in models like...