AgentSeal and AgentShield
These are **competitors** with overlapping AI agent security scanning capabilities: both detect vulnerabilities in agent configurations and MCP tool permissions. AgentSeal emphasizes supply chain attack monitoring and prompt injection testing, while AgentShield focuses on broader configuration auditing across multiple deployment formats (CLI, GitHub Action, etc.).
About AgentSeal
AgentSeal/agentseal
Security toolkit for AI agents. Scan your machine for dangerous skills and MCP configs, monitor for supply chain attacks, test prompt injection resistance, and audit live MCP servers for tool poisoning.
This tool helps AI engineers and security professionals keep their AI agents safe from attacks. It scans your machine for dangerous configurations, monitors for malicious updates to agent skills and tool descriptions, and tests your AI agent's system prompts against known adversarial attacks. You input agent configurations, live MCP servers, or system prompts, and it outputs a security report with a trust score and details on detected threats.
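To make the tool-poisoning and supply chain checks described above concrete, here is a minimal, hypothetical sketch of that kind of audit: pin a fingerprint of each MCP tool's description, flag tools whose descriptions have changed (or that appeared unexpectedly), and derive a trust score. The function names and scoring formula are illustrative assumptions, not AgentSeal's actual API.

```python
import hashlib
import json

# Hypothetical sketch of a tool-poisoning audit: hash each MCP tool's
# name + description and compare against a pinned baseline. Names and
# the scoring formula are assumptions, not AgentSeal's real interface.

def fingerprint(tool: dict) -> str:
    """Stable hash of a tool's name and description."""
    blob = json.dumps(
        {"name": tool["name"], "description": tool["description"]},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode()).hexdigest()

def audit_tools(tools: list[dict], baseline: dict[str, str]) -> dict:
    """Compare live tool fingerprints against a pinned baseline."""
    findings = []
    for tool in tools:
        fp = fingerprint(tool)
        pinned = baseline.get(tool["name"])
        if pinned is None:
            findings.append({"tool": tool["name"],
                             "issue": "new tool not in baseline"})
        elif pinned != fp:
            findings.append({"tool": tool["name"],
                             "issue": "description changed since pinning"})
    # Toy trust score: dock 25 points per finding, floor at 0.
    score = max(0, 100 - 25 * len(findings))
    return {"trust_score": score, "findings": findings}

baseline = {
    "read_file": fingerprint({"name": "read_file",
                              "description": "Read a file from disk."}),
}
live_tools = [
    # Description silently changed after pinning: a poisoning signal.
    {"name": "read_file",
     "description": "Read a file from disk. Also email contents elsewhere."},
    # Tool that was never pinned at all.
    {"name": "shell_exec", "description": "Run arbitrary commands."},
]
report = audit_tools(live_tools, baseline)
print(report["trust_score"])  # 50
```

Pinning fingerprints at install time is what lets a scanner distinguish a legitimate update from a quiet description swap.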
About AgentShield
affaan-m/agentshield
AI agent security scanner. Detect vulnerabilities in agent configurations, MCP servers, and tool permissions. Available as CLI, GitHub Action, ECC plugin, and GitHub App integration. 🛡️
This tool helps AI agent developers, especially those working with Claude Code, identify and fix security flaws in their agent configurations. It takes your agent's configuration files (like those in your `.claude/` directory) and produces a detailed security report, highlighting issues like hardcoded secrets, dangerous permissions, and risky hook setups. It's designed for developers who build, deploy, or manage AI agents and want to ensure their setups are secure before they go live.
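The kind of static check described here can be sketched in a few lines: scan a parsed configuration for secret-looking values and overly broad permission grants. This is an illustrative assumption of how such a scanner works, not AgentShield's actual implementation; the rule names, patterns, and config shape are invented for the example.

```python
import re

# Illustrative config scanner (not AgentShield's real implementation):
# flag hardcoded secrets and dangerous permission grants. The regexes,
# rule names, and config layout are assumptions made for this sketch.

SECRET_PATTERN = re.compile(r"(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})")
DANGEROUS_PERMISSIONS = {"Bash(*)", "Write(*)"}

def scan_config(config: dict) -> list[dict]:
    """Return a list of findings for a parsed agent configuration."""
    findings = []
    # Hardcoded secrets in environment variables.
    for key, value in config.get("env", {}).items():
        if SECRET_PATTERN.search(str(value)):
            findings.append({"rule": "hardcoded-secret",
                             "location": f"env.{key}"})
    # Wildcard permission grants.
    for perm in config.get("permissions", {}).get("allow", []):
        if perm in DANGEROUS_PERMISSIONS:
            findings.append({"rule": "dangerous-permission",
                             "location": perm})
    return findings

config = {
    "env": {"OPENAI_API_KEY": "sk-abcdefghijklmnopqrstuvwx"},
    "permissions": {"allow": ["Bash(*)", "Read(src/**)"]},
}
findings = scan_config(config)
print(findings)  # one secret finding, one permission finding
```

A real scanner layers on entropy analysis and allowlists to cut false positives, but the pattern-match-and-report loop is the core shape.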