anmolksachan/LLMInjector
Burp Suite Extension for LLM Prompt Injection Testing
This tool helps security engineers and penetration testers automatically find prompt injection vulnerabilities in applications that use Large Language Models (LLMs). You provide an HTTP request to an LLM-backed API, and it injects various malicious prompts, then analyzes the LLM's responses to identify potential security flaws. The output is a clear report showing where vulnerabilities exist.
Use this if you are a security professional needing to thoroughly test the resilience of your LLM-integrated applications against prompt injection attacks, especially for OpenAI-compatible, Anthropic, Ollama, LocalAI, or custom LLM backends.
Not ideal if you are looking for a general-purpose web vulnerability scanner that doesn't specialize in LLM-specific threats, or if you are not familiar with web proxy tools like Burp Suite.
Stars
20
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/anmolksachan/LLMInjector"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with...
rv427447/Cognitive-Hijacking-in-Long-Context-LLMs
🧠Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt...
moketchups/permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it...
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging...
AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection
A real-world look at how hidden instructions in profiles and emails trick AI into unexpected...