azdhril/Sentinel
🛡️ Zero-trust governance for AI agents. Intercept, approve, and audit LLM actions with one decorator. Fail-secure by default.
Sentinel helps organizations prevent AI agents from taking unauthorized or risky real-world actions, such as transferring money or deleting records. It intercepts an agent's intended action and its context, then allows it, blocks it, or routes it for human approval based on predefined rules or detected anomalies. It is aimed at anyone deploying AI agents who needs autonomous systems to operate safely and within governance policies, such as operations managers, compliance officers, or finance controllers.
Use this if you need a safety net to ensure your AI agents don't perform unintended or high-risk actions without human oversight.
Not ideal if your AI agents only perform low-risk, internal data analysis that doesn't interact with external systems or sensitive operations.
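The listing does not show Sentinel's actual API, but the "one decorator" intercept-approve-block flow it describes can be sketched in Python. Everything here is hypothetical: the `govern` decorator, the `evaluate` policy function, the allowlist, and the approval threshold are illustrative stand-ins, not Sentinel's real interface.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical policy: only allowlisted actions may run, and large
# amounts are escalated to a human. Unknown actions are blocked,
# mirroring the "fail-secure by default" behavior the tagline claims.
ALLOWED_ACTIONS = {"transfer_funds"}
APPROVAL_THRESHOLD = 1_000.0


@dataclass
class Decision:
    verdict: str  # "allow", "block", or "escalate"
    reason: str


def evaluate(action: str, amount: float) -> Decision:
    """Apply the predefined rules to an intended action and its context."""
    if action not in ALLOWED_ACTIONS:
        return Decision("block", f"action {action!r} is not allowlisted")
    if amount > APPROVAL_THRESHOLD:
        return Decision("escalate", "amount exceeds the approval threshold")
    return Decision("allow", "within policy")


def govern(func: Callable[..., Any]) -> Callable[..., Any]:
    """Intercept the call, consult the policy, and fail secure otherwise."""
    def wrapper(action: str, amount: float) -> Any:
        decision = evaluate(action, amount)
        if decision.verdict == "allow":
            return func(action, amount)
        if decision.verdict == "escalate":
            # Route to a human instead of executing.
            return {"status": "pending_approval", "reason": decision.reason}
        raise PermissionError(decision.reason)
    return wrapper


@govern
def execute(action: str, amount: float) -> dict:
    """The risky real-world action the agent wants to perform."""
    return {"status": "executed", "action": action, "amount": amount}
```

With this sketch, `execute("transfer_funds", 100.0)` runs normally, `execute("transfer_funds", 5000.0)` is routed for approval, and `execute("delete_records", 0.0)` raises `PermissionError`, so an unrecognized action can never execute silently.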
Stars: 8
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jan 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/azdhril/Sentinel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
TouHouQing/DataSentry
🛡️ An AI-powered data governance agent platform. Supports real-time interception & database batch...
asaotomo/DeepSentry
DeepSentry (深哨): a new-generation AI-driven security operations agent. Supports hybrid local/SSH execution, dynamic risk control, and automated audit report generation. (Integrates DeepSeek/OpenAI/Ollama/LM Studio)
Raunplaymore/sentinel
Sentinel watches what your AI agents actually do — file access, network calls, risky commands —...
subcode-labs/sentinel
Secret Management for AI Agents
valeo-cash/Sentinel
Enterprise audit, compliance & budget enforcement layer for the x402 payment protocol