aporthq/aport-agent-guardrails
Pre-action authorization guardrails for AI agents - Works with OpenClaw, Claude Code, LangChain, CrewAI, and others
This project helps engineering and product teams ensure their AI agents operate safely within defined permissions. It takes an agent's intended action as input, checks it against a security policy before execution, and blocks unauthorized or risky operations. Its intended users are AI developers, security engineers, and product managers responsible for deploying and managing AI agents.
Use this if you need to prevent AI agents from performing unintended or harmful actions, especially in sensitive environments or when agents have access to external tools.
Not ideal if you are looking for a general-purpose AI agent framework or if your agents do not interact with external tools or sensitive data.
Stars
15
Forks
2
Language
Shell
License
—
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/aporthq/aport-agent-guardrails"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
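For programmatic access, the curl one-liner above can be wrapped in a few lines of Python. This is a minimal sketch assuming the endpoint returns JSON; the response schema and the mechanism for passing an API key are not documented here, so both are left out.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo.

    Assumes the endpoint returns a JSON object; the exact fields
    are not specified in this listing.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Keyless access is rate-limited to 100 requests/day.
    print(quality_url("aporthq", "aport-agent-guardrails"))
```

The keyless tier (100 requests/day) needs no extra configuration; for the 1,000/day tier, consult the API provider for how to supply the free key.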
Featured in
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier