kahalewai/agent-policy-engine
Agent Policy Engine is a policy enforcement point (PEP) runtime for AI agents that prevents untrusted data from becoming executable authority.
This helps operations engineers and security architects prevent AI agents from performing unintended or unsafe actions in production. It takes an AI agent's proposed action and a set of predefined security policies as input, and outputs a decision on whether the agent is authorized to proceed. This ensures that even if an AI model generates a malicious or unintended command, the system blocks it, providing a crucial security layer for enterprise AI systems and safety-critical workflows.
Use this if you need to run AI agents in production and require strict, deterministic security controls to prevent unauthorized or unintended actions.
Not ideal if your AI agent applications do not involve sensitive data, external system interactions, or critical production environments where security breaches could have severe consequences.
Stars: 7
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Feb 23, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/kahalewai/agent-policy-engine"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
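The same endpoint can be called programmatically. A minimal sketch in Python, assuming only the URL pattern shown in the curl example above and a JSON response body; the response schema and field names are not documented here:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access;
    the JSON content type is an assumption)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("kahalewai", "agent-policy-engine"))
    # → https://pt-edge.onrender.com/api/v1/quality/agents/kahalewai/agent-policy-engine
```

Without an API key this stays within the 100 requests/day anonymous tier; a free key raises the limit to 1,000/day.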
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier