kajogo777/the-agent-sandbox-taxonomy

An open taxonomy and scoring framework for evaluating AI agent sandboxes: 7 defense layers, 7 threat categories, 3 evaluation dimensions, 20+ "sandboxes" scored.

Score: 26 / 100 (Experimental)

This project provides a common language and framework for evaluating the security of AI agent sandboxes. It explains what goes into securing an AI agent's environment against various threats, and produces a clear 'fingerprint' of each sandbox solution's capabilities and limitations, along with guidance on combining tools. AI security engineers, platform teams, and anyone else responsible for deploying AI agents securely can use it to assess and select the right safeguarding tools.

Use this if you need to understand, compare, or choose tools that protect your systems from AI agent misbehavior or malicious actions.

Not ideal if you're looking for an implementation guide for specific sandbox technologies or if your primary concern is traditional application security rather than AI agent-specific risks.

AI-security agent-deployment risk-assessment platform-engineering security-architecture
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 3 / 25
Community 7 / 25


Stars: 23
Forks: 2
Language: Go
License: None
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/kajogo777/the-agent-sandbox-taxonomy"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.