Prompt Injection Security Prompt Engineering Tools

Tools for detecting, testing, and defending against prompt injection attacks, jailbreaks, and adversarial prompts targeting LLMs. Does NOT include general LLM security, data poisoning defenses unrelated to prompts, or prompt engineering best practices.

There are 102 prompt injection security tools tracked. 5 score above 50 (established tier). The highest-rated is protectai/llm-guard at 65/100 with 2,660 stars.

Get all 102 projects as JSON

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=prompt-injection-security&limit=20"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.

# Tool Score Tier
1 protectai/llm-guard

The Security Toolkit for LLM Interactions

65
Established
2 MaxMLang/pytector

Easy to use LLM Prompt Injection Detection / Detector Python Package with...

62
Established
3 utkusen/promptmap

a security scanner for custom LLM applications

51
Established
4 agencyenterprise/PromptInject

PromptInject is a framework that assembles prompts in a modular fashion to...

51
Established
5 Resk-Security/Resk-LLM

Resk is a robust Python library designed to enhance security and manage...

50
Established
6 Dicklesworthstone/acip

The Advanced Cognitive Inoculation Prompt

49
Emerging
7 protectai/rebuff

LLM Prompt Injection Detector

45
Emerging
8 LostOxygen/llm-confidentiality

Whispers in the Machine: Confidentiality in Agentic Systems

43
Emerging
9 TrustAI-laboratory/Learn-Prompt-Hacking

This is The most comprehensive prompt hacking course available, which record...

43
Emerging
10 Repello-AI/whistleblower

Whistleblower is a offensive security tool for testing against system prompt...

43
Emerging
11 jailbreakme-xyz/jailbreak

jailbreakme.xyz is an open-source decentralized app (dApp) where users are...

41
Emerging
12 MindfulwareDev/PromptProof

Plug-and-play guardrail prompts for any LLM — injection defense,...

41
Emerging
13 alphasecio/prompt-guard

A web app for testing Prompt Guard, a classifier model by Meta for detecting...

41
Emerging
14 SemanticBrainCorp/SemanticShield

The Security Toolkit for managing Generative AI(especially LLMs) and...

41
Emerging
15 yunwei37/prompt-hacker-collections

prompt attack-defense, prompt Injection, reverse engineering notes and...

41
Emerging
16 cysecbench/dataset

Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking...

40
Emerging
17 Xayan/Rules.txt

A rationalist ruleset for "debugging" LLMs, auditing their internal...

40
Emerging
18 trinib/ZORG-Jailbreak-Prompt-Text

Bypass restricted and censored content on AI chat prompts 😈

39
Emerging
19 user1342/Folly

Open-source LLM Prompt-Injection and Jailbreaking Playground

38
Emerging
20 Code-and-Sorts/PromptDrifter

🧭 PromptDrifter – one‑command CI guardrail that catches prompt drift and...

38
Emerging
21 genia-dev/vibraniumdome

LLM Security Platform.

38
Emerging
22 takashiishida/cleanprompt

Anonymize sensitive information in text prompts before sending them to LLM...

38
Emerging
23 CyberAlbSecOP/MINOTAUR_Impossible_GPT_Security_Challenge

MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge,...

38
Emerging
24 M507/HackMeGPT

Vulnerable LLM Application

36
Emerging
25 Hellsender01/prompt-injection-taxonomy

A structured reference covering 253 prompt injection techniques across 17...

36
Emerging
26 hugobatista/unicode-injection

Proof of concept demonstrating Unicode injection vulnerabilities using...

34
Emerging
27 Arash-Mansourpour/Breaking-LLaMA-Limitations-for-DAN

An educational and research-based exploration into breaking the limitations...

34
Emerging
28 Addy-shetty/Pitt

PITT is an open‑source, OWASP‑aligned LLM security scanner that detects...

34
Emerging
29 LLMPID/LLMPID-AS

LLM Prompt Injection Detection API Service PoC.

34
Emerging
30 HumanCompatibleAI/tensor-trust

A prompt injection game to collect data for robust ML research

34
Emerging
31 forcesunseen/llm-hackers-handbook

A guide to LLM hacking: fundamentals, prompt injection, offense, and defense

34
Emerging
32 arekusandr/last_layer

Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️

33
Emerging
33 crodjer/biip

Strip out PII before Sending Data

32
Emerging
34 BlackTechX011/HacxGPT-Jailbreak-prompts

HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like...

32
Emerging
35 kennethleungty/ARTKIT-Gandalf-Challenge

Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT

32
Emerging
36 akazah/prompt-anonymizer

Anonymize / mask personal information before sending prompts to chat AI...

32
Emerging
37 AmanPriyanshu/FRACTURED-SORRY-Bench-Automated-Multishot-Jailbreaking

FRACTURED-SORRY-Bench: This repository contains the code and data for the...

31
Emerging
38 davidegat/happy-prompts

Utterly unelegant prompts for local LLMs, with scary results.

31
Emerging
39 jagan-raj-r/appsec-prompt-cheatsheet

A curated collection of high-quality prompts to help AppSec engineers use...

30
Emerging
40 2alf/prmptinj

Curated + custom prompt injections.

29
Experimental
41 rb81/prompt-hacking-classifier

A flexible and portable solution that uses a single robust prompt and...

29
Experimental
42 Unknown-2829/llm-prompt-engineering

A collection of prompt engineering and red-teaming experiments with large...

27
Experimental
43 promptinjection/promptinjection.github.io

Contributed by Community

27
Experimental
44 amk9978/Guardian

The LLM guardian kernel

27
Experimental
45 AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection

Lakera Gandalf AI challenge's step by step walkthrough, showcasing...

27
Experimental
46 grasses/PoisonPrompt

Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language...

27
Experimental
47 AiShieldsOrg/AiShieldsWeb

AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer

26
Experimental
48 successfulstudy/jailbreakprompt

Compile a list of AI jailbreak scenarios for enthusiasts to explore and test.

26
Experimental
49 SurceBeats/GhostInk

Emoji steganography tool that hides secret text inside emojis using Unicode...

26
Experimental
50 tuxsharxsec/Jailbreaks

A repo for all the jailbreaks

26
Experimental
51 promptslab/LLM-Prompt-Vulnerabilities

Prompts Methods to find the vulnerabilities in Generative Models

26
Experimental
52 anishrajpandey/Prompt_Injection_Detector

A lightweight web tool to detect prompt injection in AI inputs. Helps...

24
Experimental
53 asif-hanif/baple

[MICCAI 2024] Official code repository of paper titled "BAPLe: Backdoor...

24
Experimental
54 yksanjo/promptshield

🛡️ AI prompt security and validation tool to protect against prompt injection attacks

24
Experimental
55 promptshieldhq/promptshield-engine

Detection and anonymization microservice for the PromptShield stack.

24
Experimental
56 KazKozDev/system-prompt-benchmark

Test your LLM system prompts against 287 real-world attack vectors including...

24
Experimental
57 liangzid/PromptExtractionEval

Source code of the paper "Why Are My Prompts Leaked? Unraveling Prompt...

23
Experimental
58 LoonMORTI/promptshield

🛡️ Protect LLM applications with PromptShields, a robust security framework...

23
Experimental
59 Eulex0x/cleanmyprompt

A transparent, local-only tool to sanitize sensitive info for AI.

23
Experimental
60 Sushegaad/Semantic-Privacy-Guard

Semantic Privacy Guard: A Java middleware that intercepts text, identifies...

22
Experimental
61 yangyihe0305-droid/llm-red-team-research

Systematic exploration of LLM alignment boundaries through logical stress testing

22
Experimental
62 TechJackSolutions/GAIO

Open-source guardrail standard for reducing AI fabrication and improving...

22
Experimental
63 deepanshu-maliyan/guardrails-for-ai-coders

Security prompts and checklists for AI coding assistants. One command...

22
Experimental
64 AraLeo5/Semantic-Privacy-Guard

Identify and protect personal data in text by intercepting and masking PII...

22
Experimental
65 Ethan-YS/PromptGuard-for-Agents

🛡️ Universal AI defense framework protecting agents from prompt injection...

22
Experimental
66 tamadip007/getSPNless

🔍 Obtain Kerberos service tickets effortlessly using the SPN-less technique...

21
Experimental
67 ianreboot/safeprompt

Protect AI automations from prompt injection attacks. One API call stops...

21
Experimental
68 sruzima/safe-gamer-helper-chatbot

System prompt for SafeGamer Helper, an AI chatbot that teaches kids online...

21
Experimental
69 ajaakevin/HACKME

Explore and analyze WhatsApp data using open-source OSINT tools designed for...

21
Experimental
70 anuraag-khare/prompt-fence

A Python SDK (backed by Rust) for establishing cryptographic security...

21
Experimental
71 Georgeyoussef066/promptshield

🛡️ Secure your LLM applications with PromptShields, a framework designed for...

21
Experimental
72 SafellmHub/hguard-go

Guardrails for LLMs: detect and block hallucinated tool calls to improve...

21
Experimental
73 obscuralabs-AI/Symbolic-Prompt-PenTest

Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs.

21
Experimental
74 alexandrughinea/prompt-chainmail-ts

Security middleware that shields AI applications from prompt injection,...

20
Experimental
75 Pro-GenAI/Smart-Prompt-Eval

Evaluating LLM Robustness with Manipulated Prompts

20
Experimental
76 bcdannyboy/PromptMatryoshka

Multi-Provider LLM Jailbreak Research Framework

20
Experimental
77 IAHASH/iahash

IA-HASH: A simple, universal way to verify that an AI truly generated a...

20
Experimental
78 5ynthaire/5YN-LiveWebpageScanPrecision-Prompt

Prompt forces direct, real-time retrieval of unaltered text from URLs with...

20
Experimental
79 thatgeeman/prompt-injection-cv

PoC for prompt injection attacks on LLMs in recruitment. Tests Gemini's...

19
Experimental
80 nodite/llm-guard-ts

The Security Toolkit for LLM Interactions (TS version)

17
Experimental
81 gkanellopoulos/prompthorizon

Python library that enables developers to anonymize JSON objects by creating...

17
Experimental
82 vladutdinu/prompty-api

PromptyAPI, people's LLM-based applications security layer

17
Experimental
83 apologetik/CyberPrompts

A collection of Large Language Model (LLM) prompts helpful for various...

17
Experimental
84 fgtrzah/llmrfcpoc

combating the llm fomo, feeding the shiny object syndrome, for folly and...

16
Experimental
85 valentinaschiavon99/promptguard

PromptGuard · LLM Prompt Risk Analyzer · Project for "Neuere Methoden in der...

15
Experimental
86 thepratikguptaa/prompt-injection

This repository serves as a comprehensive resource for understanding and...

14
Experimental
87 pastsafe-ext/pastesafe

Chrome extension that prevents leaking API keys and sensitive data into AI chats

14
Experimental
88 ndpvt-web/aristotelian-compliance-test

When Aristotle gets a LinkedIn account and starts red-teaming LLMs....

14
Experimental
89 yeraydoblasbueno/llm-security-framework

Testing LLM vulnerabilities (Jailbreaks, Prompt Injections) locally using...

14
Experimental
90 Tarunjit45/PromptGuard

PromptGuard is a pragmatic, opinionated framework for establishing...

14
Experimental
91 PMQ9/Ordo-Maledictum-Promptorum

Researching a system for preventing prompt injection by separating user...

13
Experimental
92 sachnaror/prompt-guardrails-engine

Production-grade FastAPI microservice that forces LLMs to behave....

13
Experimental
93 Kimosabey/sentinel-layer

AI Safety, Governance, and Security Layer featuring advanced Prompt...

13
Experimental
94 coollane925/AI-FUNDAMENTALS-AND-PROBING

This is a beginner-intermediate level report for people who are interested...

13
Experimental
95 jyotisin/secure-llm-gateway

Secure large language model access by enforcing role-based controls,...

13
Experimental
96 yogeshwankhede007/WebSec-AI

WebSec-AI: A toolkit that combines AI and cybersecurity techniques to detect...

13
Experimental
97 seamus-brady/promptbouncer

A prototype defense against prompt-based attacks with real-time threat assessment.

13
Experimental
98 best247team1-cloud/Ai-shield-pro

AI Shield Pro: A secure privacy tool to redact sensitive data and engineer...

13
Experimental
99 SolsticeMoon/Spectre_Steganography_System

An experiment in LLM-Assisted steganography using zero-width text.

13
Experimental
100 rahultrivedi106/Adversarial-Prompt-Vaccination

Concept demonstration of Adversarial Prompt Vaccination (APV) — a...

12
Experimental
101 RainMaker1707/C2FrameworkDetector

Code parts for the proof of concept of "Detection of C2 Frameworks by LLMs...

11
Experimental
102 genia-dev/vibraniumdome-docs

LLM Security Platform Docs

10
Experimental

Comparisons in this category