pallma-ai/pallma-guard
PallmaAI delivers complete lifecycle security for your AI agents, from proactive red teaming to real-time threat detection and automated remediation.
This project helps security and operations teams monitor the behavior of AI agents and LLM-powered applications. It ingests traces of your AI's decision-making process and flags potential threats such as prompt injections or data leaks. The result is real-time threat detection and alerting, giving security engineers and AI application developers better control over their AI systems.
No commits in the last 6 months.
Use this if you need to ensure the security and safe operation of your AI agents and LLM applications by detecting threats in real-time.
Not ideal if you need traditional network or application security for non-AI systems, or if you want a black-box AI security solution.
Stars
11
Forks
—
Language
Python
License
Apache-2.0
Category
Last pushed
Sep 08, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/pallma-ai/pallma-guard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
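The curl example above hits a REST endpoint keyed by owner and repository name. As a minimal sketch, the snippet below builds that URL for an arbitrary repository and fetches it with the standard library; the `{owner}/{repo}` path pattern is inferred from the single example shown, and how an API key is passed is not documented here, so that part is omitted.

```python
from urllib.parse import quote
from urllib.request import urlopen
import json

# Base path inferred from the documented example URL (an assumption
# beyond the one repo shown above).
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given repository."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository.

    Assumes the endpoint returns JSON; the response schema is not
    documented here, so the raw dict is returned as-is.
    """
    with urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("pallma-ai", "pallma-guard"))
# https://pt-edge.onrender.com/api/v1/quality/agents/pallma-ai/pallma-guard
```

Within the free tier (100 requests/day without a key), `fetch_quality("pallma-ai", "pallma-guard")` would retrieve the same data as the curl command above.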
Higher-rated alternatives
ucsandman/DashClaw
🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier