anthroos/openexp

Q-learning memory for Claude Code — your AI learns from experience. 16 MCP tools, hybrid retrieval, closed-loop rewards.

Score: 40 / 100 (Emerging)

This project enhances your AI assistant (such as Claude Code) by teaching it what actually works, based on real outcomes. It ingests observations from your AI's daily tasks, such as sales emails or code commits, and uses feedback from successful results to prioritize the most relevant memories. An AI agent, developer, or sales professional can then have their assistant draw on the past experiences that have proven most effective.

Use this if you want your AI assistant to get smarter over time, learning which past decisions and strategies consistently lead to successful outcomes in your workflows.

Not ideal if you only need static memory storage for your AI without any outcome-based learning or prioritization of past experiences.
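The closed-loop reward idea described above can be sketched as a minimal Q-learning-style memory store. This is a hypothetical illustration, not the project's actual MCP tools or API: the class, method names, and the learning rate are all assumptions.

```python
# Hypothetical sketch: each stored memory keeps a Q-value that is nudged
# toward the observed reward whenever a task that used it succeeds or fails.
# Retrieval then ranks memories by learned value, so effective experiences
# surface first.

class ExperienceMemory:
    def __init__(self, alpha=0.3):
        self.alpha = alpha    # learning rate for the Q-update (assumed value)
        self.memories = {}    # mem_id -> {"text": str, "q": float}

    def store(self, mem_id, text):
        self.memories[mem_id] = {"text": text, "q": 0.0}

    def reward(self, mem_id, outcome):
        # Closed-loop update: Q <- Q + alpha * (reward - Q)
        m = self.memories[mem_id]
        m["q"] += self.alpha * (outcome - m["q"])

    def retrieve(self, k=3):
        # Return the k memory ids with the highest learned value
        ranked = sorted(self.memories.items(),
                        key=lambda kv: kv[1]["q"], reverse=True)
        return [mem_id for mem_id, _ in ranked[:k]]

mem = ExperienceMemory()
mem.store("a", "cold-email template v1")
mem.store("b", "cold-email template v2")
mem.reward("b", 1.0)   # v2 led to a reply
mem.reward("a", 0.0)   # v1 did not
print(mem.retrieve(k=1))  # -> ['b']
```

The update rule pulls each memory's value toward the rewards it actually earned, which is the "learns from experience" loop the description refers to.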

Tags: AI-agent-management · AI-memory-optimization · developer-productivity · sales-automation · workflow-intelligence
No package published · No dependents

Score breakdown (the four subscores sum to the 40/100 overall):
- Maintenance: 13 / 25
- Adoption: 5 / 25
- Maturity: 9 / 25
- Community: 13 / 25


Stars: 9
Forks: 2
Language: Python
License: MIT
Category: agent-framework
Last pushed: Mar 30, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/anthroos/openexp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
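The same endpoint can be called from Python using only the standard library. The URL matches the curl example above; the response schema is not documented here, so the fetch helper simply returns the parsed JSON as-is.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo quality endpoint shown in the curl example
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # No API key needed for up to 100 requests/day
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(quality_url("anthroos", "openexp"))
# -> https://pt-edge.onrender.com/api/v1/quality/agents/anthroos/openexp
```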