rxlqn/awesome-llm-self-reflection
Augmented LLMs with self-reflection
This project helps developers and researchers understand and implement techniques that enable large language models (LLMs) to correct their own mistakes and improve their outputs. It provides a curated list of research papers and code implementations showcasing various 'self-reflection' strategies. The primary users are AI/ML researchers and practitioners building or fine-tuning LLMs for more accurate and reliable performance.
139 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer looking to enhance the accuracy and reliability of large language models by enabling them to identify and correct errors in their own generated text or reasoning.
Not ideal if you are a non-technical end-user looking for a ready-to-use application; this is a resource for those who build and develop language models.
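Most of the curated papers share one core pattern: the model drafts an answer, critiques its own draft, and revises until the critique comes back clean. A minimal sketch of that generate-critique-revise loop, assuming a hypothetical call_llm wrapper around whatever chat-completion API you use (not any specific method from the list):

# Generate -> critique -> revise loop, the pattern most self-reflection
# papers build on. call_llm is a hypothetical stand-in for any
# chat-completion API; wire in your provider's client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider here")

def self_reflect(task: str, max_rounds: int = 2) -> str:
    answer = call_llm(f"Answer the following task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Task: {task}\nDraft answer: {answer}\n"
            "List any factual or reasoning errors. Reply NONE if there are none."
        )
        if critique.strip().upper() == "NONE":
            break  # the model judged its own draft acceptable
        answer = call_llm(
            f"Task: {task}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer, fixing these issues."
        )
    return answer

Capping the loop with max_rounds matters in practice: without it, a model that keeps finding (or inventing) flaws in its own output can revise indefinitely.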
Stars: 139
Forks: 10
Language: —
License: MIT
Category: LLM tools
Last pushed: Nov 21, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rxlqn/awesome-llm-self-reflection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
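For scripted access, a minimal sketch in Python, assuming the endpoint returns JSON; the X-API-Key header name and the response shape are assumptions, not documented behavior:

# Fetch this repo's quality data from the pt-edge API.
# Assumes a JSON body; the X-API-Key header name is a guess, so check the
# API docs for the real authentication scheme.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rxlqn/awesome-llm-self-reflection"

def fetch_quality(api_key=None):
    headers = {"X-API-Key": api_key} if api_key else {}  # hypothetical header
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_quality())  # unauthenticated: 100 requests/day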
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct