Prompt Injection Security (Prompt Engineering Tools)
Tools for detecting, testing, and defending against prompt injection attacks, jailbreaks, and adversarial prompts targeting LLMs. Does NOT include general LLM security, data poisoning defenses unrelated to prompts, or prompt engineering best practices.
102 prompt injection security tools are tracked; 5 score above 50, placing them in the Established tier. The highest-rated is protectai/llm-guard at 65/100, with 2,660 stars.
Get all 102 projects as JSON (note that the `limit=20` parameter caps the response at 20 results; increase it to retrieve the full list):

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=prompt-injection-security&limit=20"
```
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
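For programmatic access, the same query can be built and fetched from Python. This is a minimal sketch assuming only the endpoint and query parameters shown in the curl example above (`domain`, `subcategory`, `limit`); the shape of the returned JSON and any API-key header are not documented here, so `fetch_tools` is a plain unauthenticated GET.

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"


def dataset_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Build the dataset query URL from the parameters used in the curl example."""
    query = urlencode({"domain": domain, "subcategory": subcategory, "limit": limit})
    return f"{BASE}?{query}"


def fetch_tools(url: str):
    """Fetch and decode the dataset (no key needed below 100 requests/day)."""
    with urllib.request.urlopen(url) as resp:  # network call; requires connectivity
        return json.loads(resp.read())


# Request all 102 entries by raising the limit:
url = dataset_url("prompt-engineering", "prompt-injection-security", limit=102)
```

Calling `fetch_tools(url)` then returns the decoded JSON payload; inspect it once to learn the response schema before building anything on top of it.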
| # | Tool | Description | Score | Tier |
|---|---|---|---|---|
| 1 | protectai/llm-guard | The Security Toolkit for LLM Interactions | | Established |
| 2 | MaxMLang/pytector | Easy-to-use LLM Prompt Injection Detection/Detector Python Package with... | | Established |
| 3 | utkusen/promptmap | A security scanner for custom LLM applications | | Established |
| 4 | agencyenterprise/PromptInject | PromptInject is a framework that assembles prompts in a modular fashion to... | | Established |
| 5 | Resk-Security/Resk-LLM | Resk is a robust Python library designed to enhance security and manage... | | Established |
| 6 | Dicklesworthstone/acip | The Advanced Cognitive Inoculation Prompt | | Emerging |
| 7 | protectai/rebuff | LLM Prompt Injection Detector | | Emerging |
| 8 | LostOxygen/llm-confidentiality | Whispers in the Machine: Confidentiality in Agentic Systems | | Emerging |
| 9 | TrustAI-laboratory/Learn-Prompt-Hacking | The most comprehensive prompt hacking course available, which record... | | Emerging |
| 10 | Repello-AI/whistleblower | Whistleblower is an offensive security tool for testing against system prompt... | | Emerging |
| 11 | jailbreakme-xyz/jailbreak | jailbreakme.xyz is an open-source decentralized app (dApp) where users are... | | Emerging |
| 12 | MindfulwareDev/PromptProof | Plug-and-play guardrail prompts for any LLM: injection defense,... | | Emerging |
| 13 | alphasecio/prompt-guard | A web app for testing Prompt Guard, a classifier model by Meta for detecting... | | Emerging |
| 14 | SemanticBrainCorp/SemanticShield | The Security Toolkit for managing Generative AI (especially LLMs) and... | | Emerging |
| 15 | yunwei37/prompt-hacker-collections | Prompt attack-defense, prompt injection, reverse engineering notes and... | | Emerging |
| 16 | cysecbench/dataset | Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking... | | Emerging |
| 17 | Xayan/Rules.txt | A rationalist ruleset for "debugging" LLMs, auditing their internal... | | Emerging |
| 18 | trinib/ZORG-Jailbreak-Prompt-Text | Bypass restricted and censored content on AI chat prompts 😈 | | Emerging |
| 19 | user1342/Folly | Open-source LLM Prompt-Injection and Jailbreaking Playground | | Emerging |
| 20 | Code-and-Sorts/PromptDrifter | 🧭 PromptDrifter: a one-command CI guardrail that catches prompt drift and... | | Emerging |
| 21 | genia-dev/vibraniumdome | LLM Security Platform | | Emerging |
| 22 | takashiishida/cleanprompt | Anonymize sensitive information in text prompts before sending them to LLM... | | Emerging |
| 23 | CyberAlbSecOP/MINOTAUR_Impossible_GPT_Security_Challenge | MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge,... | | Emerging |
| 24 | M507/HackMeGPT | Vulnerable LLM Application | | Emerging |
| 25 | Hellsender01/prompt-injection-taxonomy | A structured reference covering 253 prompt injection techniques across 17... | | Emerging |
| 26 | hugobatista/unicode-injection | Proof of concept demonstrating Unicode injection vulnerabilities using... | | Emerging |
| 27 | Arash-Mansourpour/Breaking-LLaMA-Limitations-for-DAN | An educational and research-based exploration into breaking the limitations... | | Emerging |
| 28 | Addy-shetty/Pitt | PITT is an open-source, OWASP-aligned LLM security scanner that detects... | | Emerging |
| 29 | LLMPID/LLMPID-AS | LLM Prompt Injection Detection API Service PoC | | Emerging |
| 30 | HumanCompatibleAI/tensor-trust | A prompt injection game to collect data for robust ML research | | Emerging |
| 31 | forcesunseen/llm-hackers-handbook | A guide to LLM hacking: fundamentals, prompt injection, offense, and defense | | Emerging |
| 32 | arekusandr/last_layer | Ultra-fast, low-latency LLM prompt injection/jailbreak detection ⛓️ | | Emerging |
| 33 | crodjer/biip | Strip out PII before sending data | | Emerging |
| 34 | BlackTechX011/HacxGPT-Jailbreak-prompts | HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like... | | Emerging |
| 35 | kennethleungty/ARTKIT-Gandalf-Challenge | Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT | | Emerging |
| 36 | akazah/prompt-anonymizer | Anonymize/mask personal information before sending prompts to chat AI... | | Emerging |
| 37 | AmanPriyanshu/FRACTURED-SORRY-Bench-Automated-Multishot-Jailbreaking | FRACTURED-SORRY-Bench: code and data for the... | | Emerging |
| 38 | davidegat/happy-prompts | Utterly inelegant prompts for local LLMs, with scary results | | Emerging |
| 39 | jagan-raj-r/appsec-prompt-cheatsheet | A curated collection of high-quality prompts to help AppSec engineers use... | | Emerging |
| 40 | 2alf/prmptinj | Curated and custom prompt injections | | Experimental |
| 41 | rb81/prompt-hacking-classifier | A flexible and portable solution that uses a single robust prompt and... | | Experimental |
| 42 | Unknown-2829/llm-prompt-engineering | A collection of prompt engineering and red-teaming experiments with large... | | Experimental |
| 43 | promptinjection/promptinjection.github.io | Contributed by Community | | Experimental |
| 44 | amk9978/Guardian | The LLM guardian kernel | | Experimental |
| 45 | AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection | A step-by-step walkthrough of the Lakera Gandalf AI challenge, showcasing... | | Experimental |
| 46 | grasses/PoisonPrompt | Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language... | | Experimental |
| 47 | AiShieldsOrg/AiShieldsWeb | AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer | | Experimental |
| 48 | successfulstudy/jailbreakprompt | A compiled list of AI jailbreak scenarios for enthusiasts to explore and test | | Experimental |
| 49 | SurceBeats/GhostInk | Emoji steganography tool that hides secret text inside emojis using Unicode... | | Experimental |
| 50 | tuxsharxsec/Jailbreaks | A repo for all the jailbreaks | | Experimental |
| 51 | promptslab/LLM-Prompt-Vulnerabilities | Prompt methods to find vulnerabilities in generative models | | Experimental |
| 52 | anishrajpandey/Prompt_Injection_Detector | A lightweight web tool to detect prompt injection in AI inputs. Helps... | | Experimental |
| 53 | asif-hanif/baple | [MICCAI 2024] Official code repository of the paper titled "BAPLe: Backdoor... | | Experimental |
| 54 | yksanjo/promptshield | 🛡️ AI prompt security and validation tool to protect against prompt injection attacks | | Experimental |
| 55 | promptshieldhq/promptshield-engine | Detection and anonymization microservice for the PromptShield stack | | Experimental |
| 56 | KazKozDev/system-prompt-benchmark | Test your LLM system prompts against 287 real-world attack vectors including... | | Experimental |
| 57 | liangzid/PromptExtractionEval | Source code of the paper "Why Are My Prompts Leaked? Unraveling Prompt... | | Experimental |
| 58 | LoonMORTI/promptshield | 🛡️ Protect LLM applications with PromptShields, a robust security framework... | | Experimental |
| 59 | Eulex0x/cleanmyprompt | A transparent, local-only tool to sanitize sensitive info for AI | | Experimental |
| 60 | Sushegaad/Semantic-Privacy-Guard | Semantic Privacy Guard: a Java middleware that intercepts text, identifies... | | Experimental |
| 61 | yangyihe0305-droid/llm-red-team-research | Systematic exploration of LLM alignment boundaries through logical stress testing | | Experimental |
| 62 | TechJackSolutions/GAIO | Open-source guardrail standard for reducing AI fabrication and improving... | | Experimental |
| 63 | deepanshu-maliyan/guardrails-for-ai-coders | Security prompts and checklists for AI coding assistants. One command... | | Experimental |
| 64 | AraLeo5/Semantic-Privacy-Guard | Identify and protect personal data in text by intercepting and masking PII... | | Experimental |
| 65 | Ethan-YS/PromptGuard-for-Agents | 🛡️ Universal AI defense framework protecting agents from prompt injection... | | Experimental |
| 66 | tamadip007/getSPNless | 🔍 Obtain Kerberos service tickets effortlessly using the SPN-less technique... | | Experimental |
| 67 | ianreboot/safeprompt | Protect AI automations from prompt injection attacks. One API call stops... | | Experimental |
| 68 | sruzima/safe-gamer-helper-chatbot | System prompt for SafeGamer Helper, an AI chatbot that teaches kids online... | | Experimental |
| 69 | ajaakevin/HACKME | Explore and analyze WhatsApp data using open-source OSINT tools designed for... | | Experimental |
| 70 | anuraag-khare/prompt-fence | A Python SDK (backed by Rust) for establishing cryptographic security... | | Experimental |
| 71 | Georgeyoussef066/promptshield | 🛡️ Secure your LLM applications with PromptShields, a framework designed for... | | Experimental |
| 72 | SafellmHub/hguard-go | Guardrails for LLMs: detect and block hallucinated tool calls to improve... | | Experimental |
| 73 | obscuralabs-AI/Symbolic-Prompt-PenTest | Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs | | Experimental |
| 74 | alexandrughinea/prompt-chainmail-ts | Security middleware that shields AI applications from prompt injection,... | | Experimental |
| 75 | Pro-GenAI/Smart-Prompt-Eval | Evaluating LLM Robustness with Manipulated Prompts | | Experimental |
| 76 | bcdannyboy/PromptMatryoshka | Multi-Provider LLM Jailbreak Research Framework | | Experimental |
| 77 | IAHASH/iahash | IA-HASH: a simple, universal way to verify that an AI truly generated a... | | Experimental |
| 78 | 5ynthaire/5YN-LiveWebpageScanPrecision-Prompt | A prompt that forces direct, real-time retrieval of unaltered text from URLs with... | | Experimental |
| 79 | thatgeeman/prompt-injection-cv | PoC for prompt injection attacks on LLMs in recruitment. Tests Gemini's... | | Experimental |
| 80 | nodite/llm-guard-ts | The Security Toolkit for LLM Interactions (TS version) | | Experimental |
| 81 | gkanellopoulos/prompthorizon | Python library that enables developers to anonymize JSON objects by creating... | | Experimental |
| 82 | vladutdinu/prompty-api | PromptyAPI, a security layer for LLM-based applications | | Experimental |
| 83 | apologetik/CyberPrompts | A collection of Large Language Model (LLM) prompts helpful for various... | | Experimental |
| 84 | fgtrzah/llmrfcpoc | Combating the LLM FOMO, feeding the shiny object syndrome, for folly and... | | Experimental |
| 85 | valentinaschiavon99/promptguard | PromptGuard · LLM Prompt Risk Analyzer · Project for "Neuere Methoden in der... | | Experimental |
| 86 | thepratikguptaa/prompt-injection | A comprehensive resource for understanding and... | | Experimental |
| 87 | pastsafe-ext/pastesafe | Chrome extension that prevents leaking API keys and sensitive data into AI chats | | Experimental |
| 88 | ndpvt-web/aristotelian-compliance-test | When Aristotle gets a LinkedIn account and starts red-teaming LLMs... | | Experimental |
| 89 | yeraydoblasbueno/llm-security-framework | Testing LLM vulnerabilities (jailbreaks, prompt injections) locally using... | | Experimental |
| 90 | Tarunjit45/PromptGuard | PromptGuard is a pragmatic, opinionated framework for establishing... | | Experimental |
| 91 | PMQ9/Ordo-Maledictum-Promptorum | Researching a system for preventing prompt injection by separating user... | | Experimental |
| 92 | sachnaror/prompt-guardrails-engine | Production-grade FastAPI microservice that forces LLMs to behave... | | Experimental |
| 93 | Kimosabey/sentinel-layer | AI Safety, Governance, and Security Layer featuring advanced Prompt... | | Experimental |
| 94 | coollane925/AI-FUNDAMENTALS-AND-PROBING | A beginner-to-intermediate-level report for people who are interested... | | Experimental |
| 95 | jyotisin/secure-llm-gateway | Secure large language model access by enforcing role-based controls,... | | Experimental |
| 96 | yogeshwankhede007/WebSec-AI | WebSec-AI: a toolkit that combines AI and cybersecurity techniques to detect... | | Experimental |
| 97 | seamus-brady/promptbouncer | A prototype defense against prompt-based attacks with real-time threat assessment | | Experimental |
| 98 | best247team1-cloud/Ai-shield-pro | AI Shield Pro: a secure privacy tool to redact sensitive data and engineer... | | Experimental |
| 99 | SolsticeMoon/Spectre_Steganography_System | An experiment in LLM-assisted steganography using zero-width text | | Experimental |
| 100 | rahultrivedi106/Adversarial-Prompt-Vaccination | Concept demonstration of Adversarial Prompt Vaccination (APV), a... | | Experimental |
| 101 | RainMaker1707/C2FrameworkDetector | Code parts for the proof of concept of "Detection of C2 Frameworks by LLMs... | | Experimental |
| 102 | genia-dev/vibraniumdome-docs | LLM Security Platform Docs | | Experimental |