HeadyZhang/agent-audit
Static security scanner for LLM agents — prompt injection, MCP config auditing, taint analysis. 49 rules mapped to OWASP Agentic Top 10 (2026). Works with LangChain, CrewAI, AutoGen.
This tool helps AI agent developers, security engineers, and teams operating MCP servers find security vulnerabilities in agent code before deployment. It takes your agent's code, configuration files, and live MCP server configurations as input and produces a detailed security report flagging issues such as prompt injection, leaked secrets, and unsafe tool usage, so you can identify and fix risks proactively.
104 stars. Available on PyPI.
Use this if you are developing AI agents with frameworks like LangChain or AutoGen, are a security engineer reviewing AI agent code, or manage MCP servers and need to validate their security configuration.
Not ideal if you are working with traditional software applications or chatbots without agentic capabilities, as its focus is specifically on the unique security challenges of AI agents.
Stars: 104
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Dependencies: 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/HeadyZhang/agent-audit"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Related agents
Nebulock-Inc/agentic-threat-hunting-framework
ATHF is a framework for agentic threat hunting - building systems that can remember, learn, and...
AgentSeal/agentseal
Security toolkit for AI agents. Scan your machine for dangerous skills and MCP configs, monitor...
cosai-oasis/secure-ai-tooling
The CoSAI Risk Map is a framework for identifying, analyzing, and mitigating security risks in...
oasm-platform/open-asm
Open-source platform for cybersecurity Attack Surface Management (OASM).
LucidAkshay/kavach
Tactical AI Workspace Monitor & EDR