gendigitalinc/aarts
An Open Standard for AI Agent Runtime Safety (AARTS)
AI Agent Runtime Safety (AARTS) is a vendor-neutral standard designed to help organizations secure their AI agent environments. It provides a common framework for integrating safety measures across different AI agent systems, from initial deployment to ongoing operations. This standard is for anyone responsible for the secure and ethical deployment of AI agents within an organization, such as AI architects, security engineers, or compliance officers.
Use this if you need a standardized approach to defining and implementing safety and security measures for your AI agent systems, regardless of the underlying technology.
Not ideal if you are looking for a specific security product or tool rather than a foundational framework and specification.
Stars: 12
Forks: —
Language: —
License: —
Category: —
Last pushed: Mar 05, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/gendigitalinc/aarts"
Open to everyone — 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
ucsandman/DashClaw
🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) blocks dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier