AI Red Teaming Prompt Engineering Tools
Seven AI red teaming tools are tracked; one reaches the established tier (score of 50 or above). The highest-rated is dronefreak/PromptScreen at 50/100, with 9 stars.
Get all 7 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=ai-red-teaming&limit=20"
```
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
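The endpoint above returns JSON. As a minimal sketch of consuming it, the snippet below filters projects by score; the response shape (a `projects` list with `name`, `score`, and `tier` fields) is an assumption, not confirmed by the API, and the sample payload uses made-up stand-in entries rather than real scores.

```python
import json

def established_tools(payload: str, threshold: int = 50):
    """Return names of projects scoring at or above the threshold.

    Assumes (hypothetically) that the API response is an object with a
    "projects" list whose items carry "name", "score", and "tier" keys;
    adjust to the actual payload shape.
    """
    data = json.loads(payload)
    return [p["name"] for p in data.get("projects", [])
            if p.get("score", 0) >= threshold]

# Stand-in payload with invented entries, for illustration only:
sample = json.dumps({"projects": [
    {"name": "example/tool-a", "score": 50, "tier": "Established"},
    {"name": "example/tool-b", "score": 20, "tier": "Emerging"},
]})
print(established_tools(sample))  # ['example/tool-a']
```

In practice you would feed the curl output (or a `urllib.request` response body) into `established_tools` instead of the stand-in payload.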
| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | dronefreak/PromptScreen | Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use... | 50 | Established |
| 2 | anmolksachan/LLMInjector | Burp Suite Extension for LLM Prompt Injection Testing | | Emerging |
| 3 | rv427447/Cognitive-Hijacking-in-Long-Context-LLMs | 🧠 Explore cognitive hijacking in long-context LLMs, revealing... | | Emerging |
| 4 | moketchups/permanently-jailbroken | We asked 6 AIs about their own programming. All 6 said jailbreaking will... | | Emerging |
| 5 | AhsanAyub/malicious-prompt-detection | Detection of malicious prompts used to exploit large language models (LLMs)... | | Experimental |
| 6 | AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection | A real-world look at how hidden instructions in profiles and emails trick AI... | | Experimental |
| 7 | jrajath94/adversarial-prompt-suite | Systematic red-teaming framework for adversarial prompt evaluation... | | Experimental |