AIS2Lab/MCPSecBench
MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols
This project helps developers and security researchers evaluate how securely Large Language Models (LLMs) behave when they use external tools and interact with MCP servers. Given an LLM (such as an OpenAI or Claude model) and a set of malicious server configurations, it produces a report on how well the model resists different classes of attack. It is aimed at people who build or secure LLM-powered applications.
Use this if you are developing LLM applications and need to systematically test their resilience against common security vulnerabilities like tool poisoning, data exfiltration, or man-in-the-middle attacks.
Not ideal if you are an end-user simply interacting with an LLM and are not involved in its security testing or development.
Stars: 30
Forks: 8
Language: Python
License: MIT
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mcp/AIS2Lab/MCPSecBench"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
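If you prefer to call the endpoint from Python rather than curl, a minimal sketch using the requests library is shown below. The URL is copied from the example above; the response schema is not documented on this page, so the script simply pretty-prints whatever JSON comes back, and the mechanism for passing an API key is omitted because it is not specified here.

import json
import requests

# Quality data for one MCP repo; URL taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/mcp/AIS2Lab/MCPSecBench"

def fetch_quality_report(url: str = URL) -> dict:
    """Fetch the quality report, raising on HTTP errors (e.g. rate limiting)."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Schema is undocumented here, so just inspect the returned keys.
    print(json.dumps(fetch_quality_report(), indent=2))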
Higher-rated alternatives
stacklok/toolhive
ToolHive makes deploying MCP servers easy, secure and fun
sparfenyuk/mcp-proxy
A bridge between Streamable HTTP and stdio MCP transports
samanhappy/mcphub
A unified hub for centrally managing and dynamically orchestrating multiple MCP servers/APIs...
ravitemer/mcp-hub
A centralized manager for Model Context Protocol (MCP) servers with dynamic server management...
metatool-ai/metamcp
MCP Aggregator, Orchestrator, Middleware, Gateway in one docker