backbay-labs/hush
Portable security rules for the tool boundary of AI agents
AI agents need clear rules to prevent unintended actions such as reading sensitive files or making unauthorized network calls. HushSpec provides a standard, portable way to define these security rules, which act as guardrails at the agent's tool boundary. You write a policy file describing what your agent may and may not do; Hush checks the agent's actions against those rules and produces audit trails for compliance. It is aimed at anyone deploying or operating AI agents who needs to enforce strict operational boundaries and ensure secure, predictable behavior.
Use this if you need to define and enforce clear, portable security policies for your AI agents across different environments and programming languages.
Not ideal if you are looking for a system to train AI agents or manage their overall lifecycle beyond security policy enforcement.
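To make the policy-file idea concrete, here is a minimal sketch of what a deny-by-default tool-boundary policy could look like. This is a hypothetical format invented for illustration: the keys, rule names, and file layout are assumptions, not HushSpec's actual schema, which may differ substantially.

```yaml
# Hypothetical policy sketch -- NOT HushSpec's real schema.
# Deny-by-default: any tool action not explicitly allowed is blocked.
version: 1
default: deny

rules:
  # Read-only access limited to the project workspace.
  - action: fs.read
    allow: ["./workspace/**"]
    deny: ["./workspace/.env", "**/*.pem"]   # never expose secrets

  # Writes confined to a scratch directory.
  - action: fs.write
    allow: ["./workspace/tmp/**"]

  # Network requests restricted to an approved host.
  - action: net.request
    allow: ["https://api.example.com/**"]

audit:
  log: "./hush-audit.jsonl"   # append-only trail for compliance review
```

The deny-by-default pattern is the key design choice such a policy would encode: rather than enumerating forbidden actions, everything is blocked unless a rule explicitly permits it.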
Stars: 7
Forks: —
Language: Rust
License: Apache-2.0
Category:
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/backbay-labs/hush"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ucsandman/DashClaw
🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier