edictum-ai/edictum
Runtime governance for AI agents. Contracts enforce what tools can do — before they execute.
Edictum lets AI agent developers enforce strict safety rules on what their agents can do. Rules are written in a clear YAML format and declare which actions are allowed or forbidden; at runtime, each proposed action gets a clear decision: either it proceeds, or it is blocked with a specific reason, stopping unwanted or risky operations before they happen. Intended for developers and AI operations teams who build and deploy AI agents.
Use this if you need to reliably prevent AI agents from performing unintended or unsafe actions by enforcing explicit rules at the moment the agent tries to use a tool.
Not ideal if you only want to nudge agent behavior through prompts rather than enforce strict, non-negotiable rules.
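To make the pattern concrete, here is a minimal sketch of rules-before-execution enforcement as described above. This is not edictum's actual API; the rule schema and `check_action` function are hypothetical, and the rules dict stands in for a parsed YAML policy file.

```python
# Hypothetical illustration of the enforcement pattern: declare deny rules
# up front, then check every proposed tool call before it runs.
# This dict stands in for a parsed YAML policy file.
RULES = {
    "deny": [
        {"tool": "shell", "arg_contains": "rm -rf"},
        {"tool": "http_request", "arg_contains": "internal.example.com"},
    ],
}

def check_action(tool: str, arg: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed tool call."""
    for rule in RULES["deny"]:
        if rule["tool"] == tool and rule["arg_contains"] in arg:
            return False, f"blocked by deny rule: {rule['arg_contains']!r}"
    return True, "allowed"

print(check_action("shell", "rm -rf /tmp/scratch"))  # → (False, "blocked by deny rule: 'rm -rf'")
print(check_action("shell", "ls -la"))               # → (True, 'allowed')
```

The key design point is that the check runs at the moment of tool invocation, not as a suggestion in the prompt, so a misbehaving model cannot talk its way past the rules.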
Stars
13
Forks
2
Language
Python
License
MIT
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/edictum-ai/edictum"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier