darfaz/clawmoat
🦀 Security moat for AI agents. Runtime protection against prompt injection, tool misuse, and data exfiltration.
Clawmoat helps secure AI agents, such as those used for coding, customer service, or data analysis, by preventing them from performing harmful actions. It inspects agent inputs and outputs (instructions, tool results, responses) and flags content that could lead to data leaks, dangerous commands, or hijacked behavior. It is aimed at security engineers, AI developers, and operations teams running agents in production.
Available on npm.
Use this if you are deploying AI agents that have access to sensitive data, external tools, or system resources and need to protect against prompt injection, data exfiltration, and misuse.
Not ideal if your AI agents operate in a fully isolated, sandboxed environment without access to any external systems or sensitive information.
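As a rough illustration of the runtime-guard pattern described above, the sketch below wraps an agent's tool calls and screens their output against a few injection and exfiltration patterns before the agent sees it. The function names and patterns here are illustrative assumptions, not clawmoat's actual API.

// Minimal sketch of a runtime guard for agent tool calls.
// NOTE: screenContent and guardTool are illustrative names, not clawmoat's API.
const SUSPICIOUS_PATTERNS = [
  /ignore (all )?previous instructions/i,        // classic prompt-injection phrasing
  /curl\s+https?:\/\/\S+\s*\|\s*(ba)?sh/i,       // remote script piped into a shell
  /-----BEGIN (RSA |OPENSSH )?PRIVATE KEY-----/, // credential material leaving the system
];

function screenContent(text) {
  const hits = SUSPICIOUS_PATTERNS.filter((p) => p.test(text));
  return { flagged: hits.length > 0, hits };
}

// Wrap a tool function so its output is screened before reaching the agent.
function guardTool(tool) {
  return async (...args) => {
    const result = await tool(...args);
    const verdict = screenContent(String(result));
    if (verdict.flagged) {
      throw new Error(`Blocked tool output: ${verdict.hits.length} suspicious pattern(s) matched`);
    }
    return result;
  };
}

// Usage: const safeFetchPage = guardTool(fetchPage);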
Stars: 26
Forks: 5
Language: JavaScript
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/darfaz/clawmoat"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
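To call the same endpoint from code, here is a minimal fetch sketch (Node 18+ with global fetch, run as an ES module); the response schema isn't documented here, so it simply prints whatever JSON comes back.

// Node 18+ global fetch; run as an ES module (e.g. save as clawmoat-stats.mjs).
const url = "https://pt-edge.onrender.com/api/v1/quality/agents/darfaz/clawmoat";

const res = await fetch(url); // keyless tier: 100 requests/day
if (!res.ok) {
  throw new Error(`Request failed: ${res.status} ${res.statusText}`);
}
console.log(await res.json()); // response shape not documented here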
Related agents
heshengtao/super-agent-party
⭐ All-in-one AI companion! Super Agent Party = Self hosted neuro sama + openclaw! ⭐...
dataelement/Clawith
OpenClaw for Teams
scottgl9/LeanClaw
LeanClaw is a high-efficiency, security-first AI assistant runtime built for fast local...
romanklis/openclaw-contained
TaskForge runs AI agents in sandboxed Docker containers with capability-based security. Agents...
quoroom-ai/room
Open-source earning-focused swarm intelligence engine. Self-governing AI collectives (queen,...