utkusen/promptmap

a security scanner for custom LLM applications

51
/ 100
Established

This project helps security professionals and developers ensure their custom Large Language Model (LLM) applications are robust against malicious inputs. It takes your LLM application's system prompts or an external HTTP endpoint and a set of test rules, then outputs a report detailing any vulnerabilities like prompt injection or data leakage. Security engineers, QA testers, and developers building LLM-powered features would use this tool.

1,146 stars.

Use this if you need to automatically check your custom LLM applications for prompt injection, data exfiltration, jailbreaking, or other security weaknesses before deployment.

Not ideal if you are looking for a general-purpose security scanner for traditional web applications or if you don't have a custom LLM application to test.

LLM-security prompt-injection application-security AI-safety vulnerability-testing
No Package No Dependents
Maintenance 6 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 19 / 25

How are scores calculated?

Stars

1,146

Forks

120

Language

Python

License

GPL-3.0

Last pushed

Dec 01, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/utkusen/promptmap"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.