arielshad/balagan-agent
Chaos Engineering for AI Agents
balagan-agent helps AI engineers and SREs rigorously test their AI agents before deployment. It takes your existing AI agent, simulates real-world problems such as tool failures, delays, or corrupted data, and then shows you how resilient the agent is. The output includes metrics like recovery time and a reliability score, so you can identify and fix vulnerabilities proactively.
Available on PyPI.
Use this if you are building AI agents for production and need to ensure they are robust and reliable when external services or data sources behave unexpectedly.
Not ideal if you are only experimenting with AI agents in non-critical environments where reliability and fault tolerance are not primary concerns.
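The fault-injection idea behind this kind of testing can be sketched generically. The snippet below is an illustrative sketch only: the `flaky` wrapper and `reliability_score` helper are hypothetical names invented here, not balagan-agent's actual API.

```python
import random
import time


def flaky(tool, failure_rate=0.3, max_delay=0.0, rng=None):
    """Wrap a tool callable so it sometimes fails or stalls.

    Generic chaos-injection sketch; not balagan-agent's real interface.
    """
    rng = rng or random.Random()

    def wrapped(*args, **kwargs):
        if max_delay:
            # Simulate a slow external service.
            time.sleep(rng.uniform(0, max_delay))
        if rng.random() < failure_rate:
            # Simulate a tool outage the agent must recover from.
            raise RuntimeError("injected tool failure")
        return tool(*args, **kwargs)

    return wrapped


def reliability_score(agent_step, trials=100):
    """Fraction of trials the agent step completes despite injected faults."""
    ok = 0
    for _ in range(trials):
        try:
            agent_step()
            ok += 1
        except RuntimeError:
            pass
    return ok / trials
```

With a seeded `random.Random`, the same fault sequence replays deterministically, which makes regressions in an agent's retry logic reproducible.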
Stars: 8
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 15, 2026
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/arielshad/balagan-agent"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
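The same endpoint can be called from Python with the standard library. This is a minimal sketch assuming the endpoint returns JSON; the response schema is not documented here, so the result is returned as an untyped dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def build_url(owner: str, repo: str) -> str:
    """Construct the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes a JSON response body."""
    with urllib.request.urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    print(fetch_quality("arielshad", "balagan-agent"))
```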
Related agents
Clawland-AI/Geneclaw
Self-evolving AI agent framework with 5-layer safety gatekeeper. Agents observe failures,...
suryast/agent-taxonomy
🧬 Evolutionary taxonomy and self-improvement framework for AI agents — Lamarckian inheritance,...
evoplex/evoplex
Evoplex is a fast, robust and extensible platform for developing agent-based models and...
adrianwedd/AI-SWA
SelfArchitectAI - A self-evolving software architect agent that reasons about its own design,...
MrTsepa/autoevolve
AI agent evolving strategies through automated self-play overnight. Generic framework with...