ucsandman/DashClaw
🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require approvals, and produce audit-ready decision trails.
DashClaw helps AI developers and operations teams prevent autonomous agents from making costly errors in real-world scenarios. It intercepts agent-proposed actions, enforces guard policies, requires human approval for sensitive operations, and outputs a complete, auditable decision trail. Use it to keep your AI agents operating safely within defined boundaries.
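The intercept-then-decide flow described above can be sketched in a few lines. This is an illustrative sketch only, not DashClaw's actual API: the function and field names (`intercept`, `policies`, `auditTrail`, the `decision` values) are hypothetical.

```javascript
// Hypothetical sketch of the intercept → policy check → approval → audit flow.
// None of these names come from DashClaw itself; they only illustrate the pattern.

const auditTrail = [];

const policies = [
  // Deny destructive actions outright; route payments to a human.
  { match: (a) => a.type === "delete", decision: "deny" },
  { match: (a) => a.type === "payment", decision: "needs_approval" },
];

function intercept(action) {
  const rule = policies.find((p) => p.match(action));
  const decision = rule ? rule.decision : "allow";
  // Every decision is recorded, allowed or not, so the trail is complete.
  auditTrail.push({ action, decision, at: new Date().toISOString() });
  return decision;
}

console.log(intercept({ type: "payment", amount: 500 })); // "needs_approval"
console.log(intercept({ type: "read", path: "/tmp" }));   // "allow"
```

The key design point is that the audit trail records every proposed action with its decision, not just the blocked ones, which is what makes the trail audit-ready.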
121 stars. Available on npm.
Use this if you need to add a critical layer of safety and control to your AI agents, ensuring they don't execute risky or unintended actions without proper oversight.
Not ideal if you're only looking for basic logging or debugging of agent behavior without the need for pre-execution policy enforcement or human intervention.
Stars: 121
Forks: 23
Language: JavaScript
License: MIT
Category:
Last pushed: Mar 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/ucsandman/DashClaw"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
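The same endpoint can be called from JavaScript. The base URL and repo path come from the curl example above; the `agentQualityUrl` helper name is mine, and the response shape is not documented here, so the sketch only builds the request URL and issues a plain GET.

```javascript
// Build the quality-API URL for a repo slug, then fetch it.
// Base URL is taken from the curl example above; helper name is illustrative.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/agents";

function agentQualityUrl(owner, repo) {
  // Encode each path segment defensively in case a slug contains
  // characters that are unsafe in URLs.
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

const url = agentQualityUrl("ucsandman", "DashClaw");
console.log(url);
// → https://pt-edge.onrender.com/api/v1/quality/agents/ucsandman/DashClaw

// Uncomment to actually call the API (no key needed, per the note above):
// fetch(url).then((r) => r.json()).then((data) => console.log(data));
```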
Related agents
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier
project-codeguard/rules
Project CodeGuard is an AI model-agnostic security framework and ruleset that embeds...