liu00222/Open-Prompt-Injection

This repository provides a benchmark for prompt injection attacks and defenses in LLMs

53
/ 100
Established

This toolkit helps evaluate and implement defenses against 'prompt injection' attacks on applications built with large language models (LLMs). It takes an LLM, a target task (like sentiment analysis), and potential injected instructions, then measures how well the LLM resists or identifies these malicious prompts. Anyone building or managing LLM-powered applications who needs to ensure their AI models behave as intended, without being hijacked by unexpected user input, would use this.

406 stars.

Use this if you are developing or securing an application that uses a large language model and you need to test its resilience against malicious or unintended instructions hidden within user inputs.

Not ideal if you are a general user simply interacting with an LLM and are not involved in the development or security testing of LLM-integrated applications.

LLM security AI application development prompt engineering model risk management cybersecurity
No Package No Dependents
Maintenance 6 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 21 / 25

How are scores calculated?

Stars

406

Forks

64

Language

Python

License

MIT

Last pushed

Oct 29, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/liu00222/Open-Prompt-Injection"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.