madaan/self-refine

LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.

Quality score: 43 / 100 (Emerging)

This project helps developers improve the quality of text and code generated by large language models (LLMs). An LLM first generates an initial output, then critiques its own work by producing feedback, and finally uses that feedback to refine the output. This iterative self-correction loop is useful for anyone working with LLMs who wants more accurate results on tasks such as generating acronyms, improving code readability, or producing dialogue responses.
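The generate → critique → refine loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the repository's actual API: the `llm` callable and the prompt wording are stand-ins you would replace with a real model call and task-specific prompts.

```python
def self_refine(task, llm, is_good_enough, max_iters=3):
    """Iteratively improve an LLM output using the model's own feedback.

    task           -- description of what to produce
    llm            -- callable mapping a prompt string to a completion string
    is_good_enough -- callable deciding whether the feedback signals we can stop
    max_iters      -- cap on refinement rounds
    """
    # Step 1: generate an initial output.
    output = llm(f"Task: {task}\nProduce an initial answer.")
    for _ in range(max_iters):
        # Step 2: ask the model to critique its own answer.
        feedback = llm(f"Task: {task}\nAnswer: {output}\nCritique this answer.")
        if is_good_enough(feedback):
            break
        # Step 3: rewrite the answer using that feedback, then repeat.
        output = llm(
            f"Task: {task}\nAnswer: {output}\n"
            f"Feedback: {feedback}\nRewrite the answer."
        )
    return output
```

In practice the stopping test often checks the feedback for an explicit "no further issues" marker, and each round's prompt would carry task-specific instructions rather than the generic templates shown here.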

785 stars. No commits in the last 6 months.

Use this if you are a developer aiming to enhance the accuracy and quality of outputs from your LLM applications through an automated self-correction loop.

Not ideal if you are looking for a plug-and-play solution without needing to integrate and manage LLM prompts and iterations.

Tags: LLM application development, prompt engineering, natural language generation, code generation, AI model refinement
Stale (6m) | No Package | No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 785
Forks: 68
Language: Python
License: Apache-2.0
Last pushed: Oct 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/madaan/self-refine"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.