ZenGuard-AI/fast-llm-security-guardrails

The fastest Trust Layer for AI Agents

Quality score: 52 / 100 (Established)

This project helps businesses ensure their AI agents are safe and secure for public use. It takes inputs such as user prompts and AI responses and checks them for security risks, such as attempts to manipulate the AI or to leak sensitive data, so that the deployed agent handles interactions responsibly. It is designed for AI product managers, developers, and security professionals who run AI agents in production environments.


Use this if you are deploying AI agents or large language model (LLM) applications and need to protect them from prompt injections, data leakage, and inappropriate content generation.

Not ideal if you are working with AI models in a research-only capacity and do not need real-time, production-grade security for user interactions.

Tags: AI agent security, LLM guardrails, data privacy, content moderation, AI risk management
No package published · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25
The four subscores sum to the overall score: 10 + 10 + 16 + 16 = 52 / 100.


Stars: 152
Forks: 21
Language: Python
License: MIT
Last pushed: Feb 03, 2026
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/ZenGuard-AI/fast-llm-security-guardrails"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
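The same endpoint can also be called from code. A minimal Python sketch using only the standard library, assuming the endpoint returns JSON (the response schema is not documented here, so it is parsed generically):

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("ZenGuard-AI", "fast-llm-security-guardrails")
# Uncomment to fetch live data (100 requests/day without an API key):
# data = json.load(urlopen(url))
```

No key is needed for up to 100 requests per day; pass `quality_url` any `owner`/`repo` pair to query other projects.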