edictum-ai/edictum

Runtime governance for AI agents. Contracts enforce what tools can do — before they execute.

Score: 37 / 100 (Emerging)

Edictum lets AI agent developers enforce explicit safety rules on what their agents can do. It takes predefined rules, written in YAML, that describe which actions an agent may or may not perform, and returns a clear decision: the proposed action either proceeds or is blocked with a specific reason, stopping unwanted or risky operations before they happen. It is aimed at developers and AI operations teams who build and deploy AI agents.
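The allow-or-block flow described above can be sketched in a few lines. This is a hypothetical illustration: the rule schema (as it might look after parsing a YAML contract) and the decision shape are assumptions for the sketch, not Edictum's actual contract format or API.

```python
# Hypothetical sketch of contract-style enforcement: rules are declared
# up front and every proposed tool call is checked before it executes.
# The rule schema below is an assumption, not Edictum's real format.

# Rules as they might look after parsing a YAML contract file.
RULES = [
    {"tool": "shell.run", "effect": "deny",
     "reason": "shell access is forbidden"},
    {"tool": "http.get", "effect": "allow"},
]

def evaluate(tool_name):
    """Return an allow/block decision for a proposed tool call."""
    for rule in RULES:
        if rule["tool"] == tool_name:
            if rule["effect"] == "allow":
                return {"allowed": True, "reason": None}
            return {"allowed": False, "reason": rule["reason"]}
    # Default-deny: anything not explicitly allowed is blocked,
    # with a specific reason the caller can log or surface.
    return {"allowed": False,
            "reason": f"no rule permits '{tool_name}'"}

print(evaluate("http.get"))   # allowed to proceed
print(evaluate("shell.run"))  # blocked, with the rule's reason
```

The key design point the description implies is default-deny: a tool call without an explicit allow rule is blocked, so new or unexpected agent actions fail closed rather than open.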

Use this if you need to reliably prevent AI agents from performing unintended or unsafe actions by enforcing explicit rules at the moment the agent tries to use a tool.

Not ideal if you only want to nudge agent behavior through prompts rather than enforce strict, non-negotiable rules.

AI-safety Agent-governance AI-operations Compliance-enforcement Autonomous-system-guardrails
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 11 / 25


Stars: 13
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/edictum-ai/edictum"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.