aganthos/clawloop
Make your agents learn from experience. One protocol for weights, harness, and routing.
This project helps AI developers make their large language model (LLM) agents smarter over time by learning from experience. It takes agent-environment interaction traces as input and outputs improved agent behavior, playbooks of strategies, and potentially fine-tuned model weights. It is aimed at AI engineers and researchers building LLM-powered autonomous agents who want more capable and robust systems.
Use this if you are developing AI agents and want them to automatically improve their performance and adapt to new situations by learning from their past successes and failures.
Not ideal if you are looking for a pre-trained, off-the-shelf agent or if you do not have control over the agent's code or access to its interaction traces.
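The loop described above (interaction traces in, improved behavior out) can be sketched roughly as follows. Every name here (`Trace`, `Playbook`, `learn_from_traces`) is a hypothetical illustration of the idea, not clawloop's actual API:

```python
from dataclasses import dataclass, field


# Hypothetical types -- illustrative only, not clawloop's real API.
@dataclass
class Trace:
    """One agent-environment interaction: what the agent did and how it went."""
    task: str
    actions: list[str]
    success: bool


@dataclass
class Playbook:
    """Accumulated strategies, keyed by task, distilled from past traces."""
    strategies: dict[str, list[str]] = field(default_factory=dict)


def learn_from_traces(playbook: Playbook, traces: list[Trace]) -> Playbook:
    """Fold successful action sequences into the playbook so future runs
    on the same task can reuse them; failed traces are skipped here."""
    for trace in traces:
        if trace.success:
            playbook.strategies.setdefault(trace.task, []).append(
                " -> ".join(trace.actions)
            )
    return playbook


# One iteration of the improvement loop.
pb = Playbook()
traces = [
    Trace("book-flight", ["search", "compare", "purchase"], success=True),
    Trace("book-flight", ["purchase"], success=False),  # failure: not recorded
]
pb = learn_from_traces(pb, traces)
```

A real system would also mine failures for "what to avoid" lessons and, per the description above, optionally distill the playbook into fine-tuned weights; this sketch shows only the simplest success-replay step.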
Stars
17
Forks
1
Language
Python
License
—
Category
Last pushed
Mar 31, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/aganthos/clawloop"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
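The curl call above can also be issued from Python with only the standard library. The endpoint URL is taken from the example; the shape of the JSON response is not documented here, so this sketch just builds the URL and returns the parsed body as a plain dict:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record; no API key needed up to 100 requests/day."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# URL for this repo, matching the curl example above.
url = quality_url("aganthos", "clawloop")
```

For the 1,000/day tier, you would attach the free key to the request; how the key is passed (header vs. query parameter) is not specified here, so consult the API's own documentation.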
Higher-rated alternatives
ghostwright/phantom
An AI co-worker with its own computer. Self-evolving, persistent memory, MCP server, secure...
dograh-hq/dograh
Open Source Voice Agent Platform
gmickel/flow-next
Plan-first AI workflow plugin for Claude Code, OpenAI Codex, and Factory Droid. Zero-dep task...
joseairosa/recall
Persistent cross-session memory for Claude & AI agents. Self-host on Redis/Valkey, or use the...
lintsinghua/claude-code-book
《御舆:解码 Agent Harness》(Decoding the Agent Harness): a 420,000-character breakdown of the harness skeleton and nervous system of AI agents. A deep dive into the Claude Code architecture, 15 chapters from the conversation loop to building your own Agent...