ucsandman/DashClaw

🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require approvals, and produce audit-ready decision trails.

Quality score: 63 / 100 (Established)

DashClaw helps AI developers and operations teams prevent autonomous agents from making costly errors in production. It intercepts agent-proposed actions, enforces guard policies, requires human approval for sensitive operations, and outputs a complete, auditable decision trail. Use it to keep your AI agents operating safely within defined boundaries.

121 stars. Available on npm.

Use this if you need to add a critical layer of safety and control to your AI agents, ensuring they don't execute risky or unintended actions without proper oversight.

Not ideal if you're only looking for basic logging or debugging of agent behavior without the need for pre-execution policy enforcement or human intervention.

Tags: AI safety · agent governance · AI operations · risk management · human-in-the-loop
No dependents.

Score breakdown:
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 20 / 25
Community: 20 / 25


Stars: 121
Forks: 23
Language: JavaScript
License: MIT
Last pushed: Mar 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/ucsandman/DashClaw"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.