jmanhype/ace-playbook
Self-improving LLM system using Generator-Reflector-Curator pattern for online learning from execution feedback
This project helps operations engineers and MLOps teams make their large language model (LLM) applications more reliable and accurate over time. It consumes execution feedback from your LLM application, automatically learns from mistakes, and records what it learns in an append-only playbook of improved strategies, so the system keeps adapting and improving without constant manual intervention.
Use this if you are running LLM-powered agents or applications in production and need them to self-correct and learn from their errors to improve performance and reduce maintenance.
Not ideal if you are developing a new LLM from scratch or looking for a fine-tuning framework, as it focuses on runtime adaptation of existing LLM systems.
Stars
27
Forks
6
Language
Python
License
—
Category
—
Last pushed
Mar 06, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jmanhype/ace-playbook"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
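The same request can be made from Python using only the standard library. This sketch mirrors the unauthenticated curl call above; the authentication mechanism for the keyed 1,000/day tier is not documented here, so it is omitted.

```python
import json
import urllib.request

# Same endpoint as the curl example; unauthenticated tier (100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jmanhype/ace-playbook"


def fetch_quality_data(url: str = URL) -> dict:
    """GET the repo's quality data and decode the JSON response body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Call `fetch_quality_data()` to retrieve the same JSON payload the curl command returns.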
Higher-rated alternatives
microsoft/SynapseML
Simple and Distributed Machine Learning
codeintegrity-ai/mutahunter
Open Source, Language Agnostic Mutation Testing
elevenlabs/elevenlabs-android
Official ElevenLabs Kotlin SDK
MilesONerd/neurenix
Empowering Intelligent Futures, One Edge at a Time.
evilsocket/ergo
🧠A tool that makes AI easier.