Szesnasty/ai-protector

Self-hosted LLM firewall and agent guardrails that block prompt injection, redact PII, enforce RBAC, and secure tool calls.

Quality score: 26 / 100 (Experimental)

AI Protector helps product teams deploy AI agents safely without worrying about security breaches or misuse. It acts as a firewall for your AI agents, inspecting both inbound user requests and outbound agent actions. The tool is aimed at product managers and engineering leaders responsible for building and deploying AI-powered applications that interact with internal tools or customer data.

Use this if you are building AI agents that call tools to delete users, issue refunds, or query databases, and you need to prevent prompt injection, unauthorized actions, or data leaks.

Not ideal if you only need basic content moderation for simple chatbots without any tool access.
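The firewall idea described above (check what goes in, check what the agent tries to do on the way out) can be sketched as a role-based allowlist applied before any tool call executes. This is an illustrative sketch of the concept only; the names `TOOL_PERMISSIONS`, `guarded_call`, and `ToolCallDenied` are hypothetical and are not AI Protector's actual API.

```python
# Hypothetical RBAC guardrail: a role-to-tools allowlist checked before
# an agent's tool call is executed. Not AI Protector's real interface.

TOOL_PERMISSIONS = {
    "support_agent": {"query_database", "issue_refund"},
    "readonly_agent": {"query_database"},
}


class ToolCallDenied(Exception):
    """Raised when a role attempts a tool call outside its allowlist."""


def guarded_call(role, tool_name, tool_fn, *args, **kwargs):
    """Execute tool_fn only if `role` is permitted to use `tool_name`."""
    allowed = TOOL_PERMISSIONS.get(role, set())
    if tool_name not in allowed:
        raise ToolCallDenied(f"{role} may not call {tool_name}")
    return tool_fn(*args, **kwargs)


# Example: a readonly agent may query the database...
def query_database(q):
    return f"results for {q!r}"


print(guarded_call("readonly_agent", "query_database", query_database, "SELECT 1"))
# ...but an attempt to call delete_user would raise ToolCallDenied.
```

The same check point is where output-side policies (PII redaction, action confirmation) would hook in, since every tool call funnels through one chokepoint.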

Tags: AI-agent-security, prompt-injection-prevention, data-privacy, access-control, AI-application-development
No package published. No dependents.
Score breakdown:
  Maintenance: 10 / 25
  Adoption: 5 / 25
  Maturity: 11 / 25
  Community: 0 / 25


Stars: 11
Forks:
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/Szesnasty/ai-protector"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.