darfaz/clawmoat

🦀 Security moat for AI agents. Runtime protection against prompt injection, tool misuse, and data exfiltration.

52
/ 100
Established

This helps secure your AI agents, such as those used for coding, customer service, or data analysis, by preventing them from performing harmful actions. It takes agent input or output (like instructions, tool results, or responses) and flags content that could lead to data leaks, dangerous commands, or hijacked behavior. This tool is for security engineers, AI developers, and operations teams deploying AI agents in production environments.

Available on npm.

Use this if you are deploying AI agents that have access to sensitive data, external tools, or system resources and need to protect against prompt injection, data exfiltration, and misuse.

Not ideal if your AI agents operate in a fully isolated, sandboxed environment without access to any external systems or sensitive information.

AI-security agent-runtime-protection data-loss-prevention prompt-injection-prevention application-security
No Dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 20 / 25
Community 15 / 25

How are scores calculated?

Stars

26

Forks

5

Language

JavaScript

License

MIT

Last pushed

Mar 13, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/darfaz/clawmoat"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.