liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
This toolkit helps evaluate and implement defenses against 'prompt injection' attacks on applications built with large language models (LLMs). It takes an LLM, a target task (like sentiment analysis), and potential injected instructions, then measures how well the LLM resists or identifies these malicious prompts. Anyone building or managing LLM-powered applications who needs to ensure their AI models behave as intended, without being hijacked by unexpected user input, would use this.
406 stars.
Use this if you are developing or securing an application that uses a large language model and you need to test its resilience against malicious or unintended instructions hidden within user inputs.
Not ideal if you are a general user simply interacting with an LLM and are not involved in the development or security testing of LLM-integrated applications.
Stars
406
Forks
64
Language
Python
License
MIT
Category
Last pushed
Oct 29, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/liu00222/Open-Prompt-Injection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language...
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection-originated risk in LLM-based agents and applications.
StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts...
mthamil107/prompt-shield
Self-learning prompt injection detection engine that gets smarter with every attack — 21...