ZenGuard-AI/fast-llm-security-guardrails
The fastest Trust Layer for AI Agents
This project helps businesses ensure their AI agents are safe and secure for public use. It takes inputs like user prompts and AI responses, then checks them for security risks such as attempts to manipulate the AI or leak sensitive data. The output is a protected AI agent that can be trusted to handle interactions responsibly. It's designed for AI product managers, developers, and security professionals who deploy AI agents in production environments.
152 stars.
Use this if you are deploying AI agents or large language model (LLM) applications and need to protect them from prompt injections, data leakage, and inappropriate content generation.
Not ideal if you are working with AI models in a research-only capacity and do not need real-time, production-grade security for user interactions.
Stars
152
Forks
21
Language
Python
License
MIT
Category
Last pushed
Feb 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/ZenGuard-AI/fast-llm-security-guardrails"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Featured in
Related agents
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier