anmolksachan/LLMInjector

Burp Suite Extension for LLM Prompt Injection Testing

39
/ 100
Emerging

This tool helps security engineers and penetration testers automatically find prompt injection vulnerabilities in applications that use Large Language Models (LLMs). You provide an HTTP request to an LLM-backed API, and it injects various malicious prompts, then analyzes the LLM's responses to identify potential security flaws. The output is a clear report showing where vulnerabilities exist.

Use this if you are a security professional needing to thoroughly test the resilience of your LLM-integrated applications against prompt injection attacks, especially for OpenAI-compatible, Anthropic, Ollama, LocalAI, or custom LLM backends.

Not ideal if you are looking for a general-purpose web vulnerability scanner that doesn't specialize in LLM-specific threats, or if you are not familiar with web proxy tools like Burp Suite.

penetration-testing application-security LLM-security vulnerability-assessment web-application-testing
No Package No Dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 11 / 25
Community 12 / 25

How are scores calculated?

Stars

20

Forks

3

Language

Python

License

Apache-2.0

Category

ai-red-teaming

Last pushed

Mar 11, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/anmolksachan/LLMInjector"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.