prompt-guard and prompt-shield
About prompt-guard
seojoonkim/prompt-guard
Advanced prompt injection defense system for AI agents. Multi-language detection, severity scoring, and security auditing.
This project helps protect your AI agents and large language model (LLM) applications from being manipulated or leaking sensitive information. It takes user input or AI-generated responses and identifies attempts to bypass safety rules or extract confidential data like API keys. Security engineers, AI product managers, or anyone deploying an AI assistant would use this to ensure their AI behaves as intended and doesn't reveal secrets.
About prompt-shield
Milbaxter/prompt-shield
AI agent security oracle. Scan any message for prompt injections. Pay with crypto. No accounts. No logs. Built for OpenClaw/Clawdbot agents.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work