mattijsmoens/sovereign-shield
AI security framework: tamper-proof action auditing, prompt injection firewall, ethical guardrails, DDoS protection, and self-improving adaptive filters. Zero dependencies, deterministic, hash-sealed integrity verification. Patent Pending.
This is a robust defense system for AI applications that process user inputs, protecting against malicious attacks like prompt injections, jailbreaks, and data exfiltration. It takes any user input to your AI system and outputs a clear 'safe' or 'blocked' decision, preventing harmful interactions. AI product managers, security engineers, and developers building user-facing AI tools would use this to ensure their applications are secure and reliable.
Available on PyPI.
Use this if you need to protect your AI application from adversarial inputs and ensure it operates ethically and securely, especially when handling untrusted user-generated content.
Not ideal if your application requires role-playing, creative writing, or hypothetical reasoning from the AI, as the default strict security rules may block legitimate inputs unless specifically configured with exceptions.
Stars
15
Forks
3
Language
Python
License
—
Category
Last pushed
Mar 13, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/mattijsmoens/sovereign-shield"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Featured in
Related agents
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier