azdhril/Sentinel

🛡️ Zero-trust governance for AI agents. Intercept, approve, and audit LLM actions with one decorator. Fail-secure by default.

Overall score: 33 / 100 (Emerging)

This helps organizations prevent AI agents from performing unauthorized or risky real-world actions, such as transferring money or deleting records. It takes an agent's intended action and related context, then allows it, blocks it, or routes it for human approval based on predefined rules or detected anomalies. It is aimed at anyone deploying AI agents who needs to ensure their autonomous systems operate safely and within governance policies, such as operations managers, compliance officers, or finance controllers.

Use this if you need a safety net to ensure your AI agents don't perform unintended or high-risk actions without human oversight.

Not ideal if your AI agents only perform low-risk, internal data analysis that doesn't interact with external systems or sensitive operations.
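The allow / block / route-for-approval flow described above can be sketched as a Python decorator. This is an illustrative sketch of the pattern, not Sentinel's actual API: the names `guard`, `check_policy`, and the `HIGH_RISK` set are all hypothetical. The fail-secure behavior means any error in the policy check blocks the action rather than letting it through.

```python
from functools import wraps

# Hypothetical policy data -- Sentinel's real rule engine may differ.
HIGH_RISK = {"transfer_funds", "delete_records"}

def check_policy(action_name):
    """Return 'allow' or 'review' for an action; raises if the engine fails."""
    return "review" if action_name in HIGH_RISK else "allow"

def guard(action_name):
    """Intercept an agent action and apply the policy, failing secure."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                verdict = check_policy(action_name)
            except Exception:
                # Fail-secure: a broken policy check blocks the action.
                verdict = "block"
            if verdict == "allow":
                return fn(*args, **kwargs)
            # Blocked or routed for human approval; the action never runs.
            return {"status": verdict, "action": action_name}
        return wrapper
    return decorator

@guard("transfer_funds")
def transfer_funds(amount, to):
    return {"status": "executed", "amount": amount, "to": to}

@guard("summarize_report")
def summarize_report(text):
    return {"status": "executed", "summary": text[:40]}
```

With these hypothetical rules, `transfer_funds(100, "acct-1")` returns a `review` verdict without executing, while `summarize_report("Q3 results...")` runs normally.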

Tags: AI-governance, agent-safety, compliance-automation, workflow-approval, risk-management
No package · No dependents
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 11 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 25, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/azdhril/Sentinel"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
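The same endpoint can be called from Python with the standard library. This is a minimal sketch: the URL comes from the curl example above, but the response schema and the authorization header name for keyed access are assumptions, so the result is returned as raw parsed JSON.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner, repo):
    """Build the quality-score endpoint URL for an owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the quality report as a dict (requires network access).

    The response schema is undocumented here, so the parsed JSON is
    returned as-is without assuming any particular fields.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Header name is an assumption; check the API docs for the real one.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (needs network): fetch_quality("azdhril", "Sentinel")
```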