gensecaihq/mcp-poisoning-poc

This repository demonstrates a variety of **MCP Poisoning Attacks** affecting real-world AI agent workflows.

38
/ 100
Emerging

This project helps AI security professionals identify and understand critical vulnerabilities within AI agent workflows that utilize the Model Context Protocol (MCP). It takes malicious tool descriptions or configurations as input and demonstrates how they can lead to sensitive data exfiltration or AI agent hijacking. The output is a clear understanding of potential attack vectors and robust defensive measures, intended for security engineers, AI system architects, and incident response teams.

No commits in the last 6 months.

Use this if you are responsible for securing AI systems and need to proactively identify and mitigate 'tool poisoning' vulnerabilities in your AI agents that use the Model Context Protocol (MCP).

Not ideal if you are looking for a general-purpose AI development tool or if your AI agents do not rely on the Model Context Protocol (MCP) for tool integration.

AI security AI agent safety vulnerability research threat modeling cyber defense
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25

How are scores calculated?

Stars

14

Forks

5

Language

Python

License

MIT

Last pushed

Jun 14, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/gensecaihq/mcp-poisoning-poc"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.