Szesnasty/ai-protector
Self-hosted LLM firewall and agent guardrails that block prompt injection, redact PII, enforce RBAC, and secure tool calls.
AI Protector helps product teams deploy AI agents safely, without worrying about security breaches or misuse. It acts as a firewall for your AI agents, checking both what goes in (user requests) and what comes out (agent actions). This tool is for product managers and engineering leaders responsible for developing and deploying AI-powered applications that interact with internal tools or customer data.
Use this if you are building AI agents that call tools to delete users, issue refunds, or query databases, and you need to prevent prompt injection, unauthorized actions, or data leaks.
Not ideal if you only need basic content moderation for simple chatbots without any tool access.
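To make the "firewall for tool calls" idea concrete, here is a minimal illustrative sketch in Python. It is not AI Protector's actual API: the function names, policy format, and injection patterns below are assumptions for demonstration only. The sketch wraps a tool call with an input check (prompt injection) and an RBAC check before the action runs.

# Illustrative sketch only. Names such as check_prompt_injection, authorize_tool_call,
# and guarded_call are hypothetical and do not reflect AI Protector's real interface.
import re

INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"reveal the system prompt"]
HIGH_RISK_TOOLS = {"delete_user", "issue_refund"}

def check_prompt_injection(user_input: str) -> bool:
    # Return True if the input matches a known injection pattern.
    return any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS)

def authorize_tool_call(role: str, tool_name: str) -> bool:
    # Simple RBAC rule: only admins may invoke high-risk tools.
    return tool_name not in HIGH_RISK_TOOLS or role == "admin"

def guarded_call(role: str, user_input: str, tool_name: str, tool_fn, *args):
    # Run the tool only if both the input and the caller pass the checks.
    if check_prompt_injection(user_input):
        raise PermissionError("Blocked: possible prompt injection in user input")
    if not authorize_tool_call(role, tool_name):
        raise PermissionError(f"Blocked: role '{role}' may not call {tool_name}")
    return tool_fn(*args)

A real deployment would replace the regex list with the project's detection models and load the RBAC rules from configuration rather than hard-coding them.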
Stars
11
Forks
—
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/Szesnasty/ai-protector"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
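The same endpoint can be queried programmatically. A minimal Python sketch follows; the response schema is not documented here, so the example simply prints whatever JSON the API returns.

import requests

# Free tier: 100 requests/day without a key; a free key raises this to 1,000/day.
url = "https://pt-edge.onrender.com/api/v1/quality/agents/Szesnasty/ai-protector"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # Inspect the payload to see the available quality fields.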
Featured in
Higher-rated alternatives
ucsandman/DashClaw
🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier