dormstern/leashed
AI got hands. This is the leash. Policy, audit, kill switch for any AI agent with access to your accounts.
Leashed sets clear boundaries on what an AI agent can do with your online accounts, such as a sales bot accessing LinkedIn or a work assistant managing email. You provide a policy of allowed and denied actions; the agent then operates within those limits, with an audit trail of everything it attempts. It is for anyone who wants the productivity of AI agents while keeping security and control over their digital accounts.
Available on npm.
Use this if you need to give an AI agent access to your online accounts but want to restrict its actions, set time limits, and keep a log of everything it attempts to do.
Not ideal if you need to restrict AI actions based on specific URLs or require deep semantic understanding of agent commands beyond literal pattern matching.
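To make the allow/deny idea concrete, here is a minimal sketch of how such a policy check could work. This is a hypothetical illustration only, not leashed's actual API; the `Policy` shape, `evaluate` function, and action names are all invented for this example. It also shows why the tool is "literal pattern matching": decisions come from regex tests on action strings, checked deny-first with a default-deny fallback, and every attempt lands in an audit log.

```typescript
// Hypothetical sketch — NOT leashed's real API or policy format.
type Decision = "allow" | "deny";

interface Policy {
  allow: RegExp[]; // action patterns the agent may perform
  deny: RegExp[];  // patterns that are always blocked (checked first)
}

interface AuditEntry {
  action: string;
  decision: Decision;
  at: string; // ISO timestamp
}

const audit: AuditEntry[] = [];

function evaluate(policy: Policy, action: string): Decision {
  const decision: Decision = policy.deny.some((p) => p.test(action))
    ? "deny"
    : policy.allow.some((p) => p.test(action))
      ? "allow"
      : "deny"; // default-deny: anything not explicitly allowed is blocked
  audit.push({ action, decision, at: new Date().toISOString() });
  return decision;
}

// Example policy: a sales bot may read LinkedIn messages
// but may never touch connection requests.
const policy: Policy = {
  allow: [/^linkedin\.messages\.read$/],
  deny: [/^linkedin\.connections\./],
};

console.log(evaluate(policy, "linkedin.messages.read"));    // "allow"
console.log(evaluate(policy, "linkedin.connections.send")); // "deny"
```

Deny-before-allow plus default-deny is the usual conservative ordering for this kind of guard: a pattern that matches both lists blocks, and an action the policy never anticipated blocks too.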
Stars: 12
Forks: —
Language: TypeScript
License: MIT
Category: —
Last pushed: Feb 23, 2026
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/dormstern/leashed"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier