microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and reliability engineering for autonomous AI agents. Covers all 10 of the OWASP Agentic Top 10.
This toolkit helps organizations manage the risks of deploying AI agents by enforcing security policies that control what agents can do, not just what they say. Given a description of your AI agents and their intended actions, it provides a secure, controlled execution environment. It is aimed at AI solution architects, security engineers, and compliance officers who deploy and manage AI agents in production.
Available on PyPI.
Use this if you are developing or deploying AI agents and need to ensure they operate securely, comply with policies, and can't take unauthorized or dangerous actions.
Not ideal if your primary concern is filtering the content of what an AI model says or does, rather than governing the actions and resources that an AI agent can access.
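The core idea of action-level governance (as opposed to content filtering) is that every action an agent attempts is intercepted and checked against a policy before it runs. A minimal sketch of that pattern follows; all class and method names here are hypothetical illustrations, not the toolkit's actual API:

```python
# Hypothetical sketch of action-level governance: every agent action
# passes through a policy check before it is allowed to execute.
# Names (Policy, GovernedAgent, authorize) are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Policy:
    allowed_tools: set[str] = field(default_factory=set)
    blocked_paths: tuple[str, ...] = ("/etc", "/root")

    def authorize(self, tool: str, target: str = "") -> bool:
        # Deny-by-default: the tool must be explicitly allowlisted,
        # and the target must not fall under a blocked path prefix.
        if tool not in self.allowed_tools:
            return False
        return not any(target.startswith(p) for p in self.blocked_paths)


@dataclass
class GovernedAgent:
    policy: Policy

    def act(self, tool: str, target: str = "") -> str:
        if not self.policy.authorize(tool, target):
            return f"DENIED: {tool} on {target!r}"
        return f"EXECUTED: {tool} on {target!r}"


policy = Policy(allowed_tools={"read_file", "http_get"})
agent = GovernedAgent(policy)
print(agent.act("read_file", "/home/user/report.txt"))  # executed: allowed tool, safe path
print(agent.act("read_file", "/etc/passwd"))            # denied: blocked path prefix
print(agent.act("delete_db"))                           # denied: tool not allowlisted
```

The key design choice is deny-by-default: an action proceeds only when a policy explicitly permits both the tool and its target, which is what distinguishes governing actions from filtering model output.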
Stars: 47
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/microsoft/agent-governance-toolkit"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
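The same endpoint can be called from Python. Here is a minimal sketch using only the standard library; the response schema is not documented here, so the code simply returns the parsed JSON, and the helper names are this example's own, not part of the service:

```python
# Sketch of calling the quality API for a repo; the endpoint URL comes
# from the curl example above, everything else is illustrative.
import json
import urllib.parse
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo endpoint; path segments are percent-encoded.
    return f"{API_BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day; how an API
    # key is supplied (header vs. query parameter) is not documented
    # here, so this sketch makes only unauthenticated calls.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


url = quality_url("microsoft", "agent-governance-toolkit")
print(url)
# fetch_quality("microsoft", "agent-governance-toolkit") would perform
# the actual request and return the decoded JSON payload.
```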
Related agents
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier
project-codeguard/rules
Project CodeGuard is an AI model-agnostic security framework and ruleset that embeds...