utkusen/promptmap
a security scanner for custom LLM applications
This project helps security professionals and developers ensure their custom Large Language Model (LLM) applications are robust against malicious inputs. It takes your LLM application's system prompts or an external HTTP endpoint and a set of test rules, then outputs a report detailing any vulnerabilities like prompt injection or data leakage. Security engineers, QA testers, and developers building LLM-powered features would use this tool.
1,146 stars.
Use this if you need to automatically check your custom LLM applications for prompt injection, data exfiltration, jailbreaking, or other security weaknesses before deployment.
Not ideal if you are looking for a general-purpose security scanner for traditional web applications or if you don't have a custom LLM application to test.
Stars
1,146
Forks
120
Language
Python
License
GPL-3.0
Category
Last pushed
Dec 01, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/utkusen/promptmap"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt