grepstrength/WideOpenAI

Short list of indirect prompt injection attacks for OpenAI-based models.

33
/ 100
Emerging

This project offers a collection of indirect prompt injection attacks designed to expose vulnerabilities in AI models like OpenAI's GPT-4o, Microsoft Copilot, and custom Azure OpenAI applications. By inputting specialized prompts based on query languages like SQL or Splunk, it aims to demonstrate how these models can be coerced into generating responses that bypass ethical safeguards, potentially revealing sensitive information. It's intended for AI security researchers, red teamers, and developers responsible for evaluating and securing large language model deployments.

No commits in the last 6 months.

Use this if you are a security professional or AI developer looking to test the robustness of your AI applications against indirect prompt injection attacks.

Not ideal if you are looking for a defensive tool to prevent prompt injection, as this project focuses on demonstrating vulnerabilities.

AI-security prompt-injection red-teaming LLM-vulnerability-testing application-security
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 8 / 25

How are scores calculated?

Stars

39

Forks

3

Language

License

MIT

Last pushed

Aug 27, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/grepstrength/WideOpenAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.