jmanhype/ace-playbook

Self-improving LLM system using Generator-Reflector-Curator pattern for online learning from execution feedback

Score: 40 / 100 (Emerging)

This project helps operations engineers and MLOps teams make their large language model (LLM) applications more reliable and accurate over time. It captures execution feedback from your LLM's runs, automatically learns from its mistakes, and improves future responses. The system maintains an "append-only playbook" of improved strategies, so your LLM adapts continually without constant manual intervention.

Use this if you are running LLM-powered agents or applications in production and need them to self-correct and learn from their errors to improve performance and reduce maintenance.

Not ideal if you are developing a new LLM from scratch or are looking for a fine-tuning framework, as this focuses on runtime adaptation of existing LLM systems.
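The Generator-Reflector-Curator pattern named above can be sketched as a simple loop: a generator produces an answer using the playbook, a reflector turns execution feedback into candidate lessons, and a curator appends only new lessons (append-only, no edits). This is a minimal illustrative sketch; all function names and data shapes here are assumptions, not the project's actual API.

```python
# Hypothetical sketch of a Generator-Reflector-Curator loop.
# Names and data shapes are assumptions, not ace-playbook's real interface.

def generator(task, playbook):
    """Produce an answer, consulting past strategies in the playbook."""
    hints = "; ".join(playbook)
    return f"answer for {task!r} (hints: {hints or 'none'})"

def reflector(task, answer, feedback):
    """Turn execution feedback into a candidate lesson, or None if nothing to learn."""
    if not feedback.get("success"):
        return f"When handling {task!r}, avoid: {feedback.get('error')}"
    return None

def curator(playbook, lesson):
    """Append-only curation: accept only new, non-duplicate lessons."""
    if lesson and lesson not in playbook:
        playbook.append(lesson)
    return playbook

playbook = []
# Simulated execution feedback for two runs of the same task.
for feedback in [{"success": False, "error": "timeout"}, {"success": True}]:
    answer = generator("parse logs", playbook)
    lesson = reflector("parse logs", answer, feedback)
    playbook = curator(playbook, lesson)

print(playbook)  # the failed run yields one lesson; the successful run adds nothing
```

The key design point mirrored here is that the curator never rewrites past entries, so earlier strategies stay auditable while new ones accumulate.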

Tags: LLM-operations, AI-agent-management, production-AI, system-reliability, MLOps
No License / No Package / No Dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 7 / 25
Community 16 / 25


Stars: 27
Forks: 6
Language: Python
License: None
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jmanhype/ace-playbook"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.