noahshinn/reflexion

[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning

Quality score: 46 / 100 (Emerging)

This project helps researchers and developers explore how language models can improve their reasoning and decision-making by learning from their mistakes. It takes a language model agent and a set of questions or tasks as input, then outputs improved answers or actions by enabling the agent to reflect on its previous attempts. This is for AI researchers and practitioners who are experimenting with advanced language model agents.
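The reflect-then-retry loop described above can be sketched in a few lines. Note this is a minimal illustration of the general idea, not the repository's actual API: the `actor`, `evaluator`, and `reflect` callables are hypothetical stand-ins for language model calls.

```python
def reflexion_loop(task, actor, evaluator, reflect, max_trials=3):
    """Run up to max_trials attempts at a task, feeding verbal
    self-reflections from failed attempts back into the actor.

    actor(task, memory)        -> attempt (hypothetical LM call)
    evaluator(task, attempt)   -> (success: bool, feedback: str)
    reflect(task, attempt, fb) -> verbal reflection string
    """
    memory = []  # episodic memory of verbal reflections
    attempt = None
    for trial in range(1, max_trials + 1):
        attempt = actor(task, memory)
        success, feedback = evaluator(task, attempt)
        if success:
            return attempt, trial
        # Turn raw feedback into a verbal reflection and remember it,
        # so the next attempt is conditioned on past mistakes.
        memory.append(reflect(task, attempt, feedback))
    return attempt, max_trials
```

With stub functions in place of real model calls, an agent that only succeeds once it has a reflection in memory finishes on its second trial.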

3,093 stars. No commits in the last 6 months.

Use this if you are developing or studying advanced AI agents and want to experiment with methods that allow them to self-correct and improve their performance on complex reasoning or decision-making tasks.

Not ideal if you are looking for a plug-and-play solution for general language model applications without a focus on agent-based learning and self-reflection.

Tags: AI agent development, language model research, reinforcement learning, autonomous reasoning, cognitive AI
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 3,093
Forks: 298
Language: Python
License: MIT
Last pushed: Jan 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/noahshinn/reflexion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
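For scripted access, the endpoint above can be consumed from Python. The URL pattern comes from the curl command; the response field names used below (`overall`, `subscores`) are assumptions for illustration, so check them against the actual JSON the API returns.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner, repo):
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner, repo, timeout=10):
    """Fetch and decode the JSON payload for one repository."""
    with urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)


def summarize(payload):
    """Flatten a (hypothetical) response payload into one line.

    Assumes 'overall' is a 0-100 score and 'subscores' maps
    category names to 0-25 values, mirroring the card above.
    """
    subs = payload.get("subscores", {})
    parts = ", ".join(f"{k}: {v}/25" for k, v in sorted(subs.items()))
    return f"{payload.get('overall', '?')}/100 ({parts})"
```

A call like `summarize(fetch_quality("noahshinn", "reflexion"))` would then print a one-line summary, subject to the anonymous rate limit of 100 requests per day.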