AI Red Teaming Prompt Engineering Tools

There are 7 AI red teaming tools tracked; 1 reaches the established tier (score of 50 or above). The highest-rated is dronefreak/PromptScreen at 50/100 with 9 stars.

Get all 7 projects as JSON

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=ai-red-teaming&limit=20"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
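If you prefer Python over curl, the same request can be made with the standard library. This is a minimal sketch: the endpoint and query parameters come from the curl example above, but the shape of the JSON response (array vs. wrapper object) is an assumption, so adjust the parsing to match what the API actually returns.

```python
# Fetch the ai-red-teaming dataset from the quality API.
# NOTE: the response schema is assumed, not documented here.
import json
import urllib.parse
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Build the query URL shown in the curl example above."""
    query = urllib.parse.urlencode(
        {"domain": domain, "subcategory": subcategory, "limit": limit}
    )
    return f"{BASE}?{query}"

def fetch_projects(domain: str, subcategory: str, limit: int = 20):
    """Request the dataset and return the parsed JSON body.

    No API key is sent, so this uses the anonymous 100 requests/day tier.
    """
    with urllib.request.urlopen(build_url(domain, subcategory, limit)) as resp:
        return json.load(resp)

# Example: the URL for this page's dataset.
print(build_url("prompt-engineering", "ai-red-teaming"))
```

`build_url` is split out so the query string can be inspected or logged before any network call is made.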

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | dronefreak/PromptScreen | Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use... | 50 | Established |
| 2 | anmolksachan/LLMInjector | Burp Suite Extension for LLM Prompt Injection Testing | 39 | Emerging |
| 3 | rv427447/Cognitive-Hijacking-in-Long-Context-LLMs | 🧠 Explore cognitive hijacking in long-context LLMs, revealing... | 36 | Emerging |
| 4 | moketchups/permanently-jailbroken | We asked 6 AIs about their own programming. All 6 said jailbreaking will... | 32 | Emerging |
| 5 | AhsanAyub/malicious-prompt-detection | Detection of malicious prompts used to exploit large language models (LLMs)... | 29 | Experimental |
| 6 | AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection | A real-world look at how hidden instructions in profiles and emails trick AI... | 22 | Experimental |
| 7 | jrajath94/adversarial-prompt-suite | Systematic red-teaming framework for adversarial prompt evaluation —... | 22 | Experimental |