LLM Firewall Defense Prompt Engineering Tools

There are 21 LLM firewall defense tools tracked, one of which scores above 50 (Established tier). The highest-rated is liu00222/Open-Prompt-Injection at 53/100, with 406 stars.

Fetch the tracked projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=llm-firewall-defense&limit=20"
```

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
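Once you have the JSON, you can re-derive each project's tier locally. A minimal Python sketch follows; the tier cutoffs (above 50 = Established, 30–50 = Emerging, below 30 = Experimental) are inferred from the scores in the table below, and the response field names (`name`, `score`) are assumptions, not a documented API contract.

```python
import json

def tier(score: int) -> str:
    """Map a 0-100 quality score to a tier label.

    Thresholds are read off the published listing (53 -> Established,
    47/33 -> Emerging, 29 -> Experimental); they are an inference,
    not an official API specification.
    """
    if score > 50:
        return "Established"
    if score >= 30:
        return "Emerging"
    return "Experimental"

# Hypothetical response excerpt -- the real payload shape may differ.
sample = json.loads("""
[
  {"name": "liu00222/Open-Prompt-Injection", "score": 53},
  {"name": "lakeraai/pint-benchmark", "score": 47},
  {"name": "sleeepeer/PIArena", "score": 27}
]
""")

for tool in sample:
    print(f'{tool["name"]}: {tool["score"]} ({tier(tool["score"])})')
```

Running this against the sample data reproduces the tier labels shown in the listing for those three projects.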

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | liu00222/Open-Prompt-Injection | This repository provides a benchmark for prompt injection attacks and... | 53 | Established |
| 2 | lakeraai/pint-benchmark | A benchmark for prompt injection detection systems. | 47 | Emerging |
| 3 | R3dShad0w7/PromptMe | PromptMe is an educational project that showcases security vulnerabilities... | 47 | Emerging |
| 4 | cybozu/prompt-hardener | Prompt Hardener analyzes prompt-injection-originated risk in LLM-based... | 47 | Emerging |
| 5 | StavC/Here-Comes-the-AI-Worm | Here Comes the AI Worm: Preventing the Propagation of Adversarial... | 36 | Emerging |
| 6 | mthamil107/prompt-shield | Self-learning prompt injection detection engine that gets smarter with every... | 36 | Emerging |
| 7 | mdombrov-33/go-promptguard | LLM prompt injection detection for Go applications | 35 | Emerging |
| 8 | grepstrength/WideOpenAI | Short list of indirect prompt injection attacks for OpenAI-based models. | 33 | Emerging |
| 9 | AdirD/prompt-security-node | 🚀 Unofficial Node.js SDK for Prompt Security's Protection API. | 29 | Experimental |
| 10 | sleeepeer/PIArena | PIArena: A Platform for Prompt Injection Evaluation | 27 | Experimental |
| 11 | StavC/PromptWares | A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are... | 26 | Experimental |
| 12 | EvanZhouDev/apple-prompt-injection | A list of Apple Intelligence prompt injections. | 22 | Experimental |
| 13 | mrSamDev/llm-moat | TypeScript toolkit for prompt injection detection, sanitization, and LLM... | 22 | Experimental |
| 14 | kourgeorge/prompt-sentinel | Python library designed to protect sensitive data when interacting with... | 21 | Experimental |
| 15 | rohilrg/CatchPromptInjection | Focuses on how to deal with the prompt injection problem faced by LLMs | 18 | Experimental |
| 16 | dakshaladia/lost-in-the-middle-prompt-injection | Research study on context-window analysis of LLMs | 17 | Experimental |
| 17 | montanaflynn/AdversarialBench | Adversarial prompt-injection benchmark for LLMs | 14 | Experimental |
| 18 | satrijan/LLM-PROMPT-INJECTION-PAYLOAD-S | 🛡️ Explore and test prompt injection techniques safely for AI applications,... | 14 | Experimental |
| 19 | ajutamangdev/PromptShield | PromptShield is an open-source LLM firewall intended to inspect prompts for... | 14 | Experimental |
| 20 | Kaynoux/xai-prompt-injections | Student project on visualizing token importance for detecting and... | 12 | Experimental |
| 21 | nedimcanulusoy/NeuroGuard | NeuroGuard is a dedicated project designed to detect prompt injections... | 11 | Experimental |