StavC/PromptWares
A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are Vulnerable to PromptWares
This project helps application developers understand how malicious users can exploit generative AI (GenAI) models within their applications. It demonstrates how specially crafted user inputs, called "PromptWares," can trick a GenAI model into performing harmful actions like launching denial-of-service attacks or manipulating data. The output provides code examples and research findings, showing developers what to look for and how to defend their GenAI-powered applications against these vulnerabilities.
No commits in the last 6 months.
Use this if you are developing or managing GenAI-powered applications and need to understand the security risks of jailbreaking and prompt injection attacks.
Not ideal if you are an end-user interacting with GenAI applications and are not involved in their development or security.
Stars
12
Forks
2
Language
Jupyter Notebook
License
—
Category
Last pushed
Aug 17, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/StavC/PromptWares"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language...
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection-originated risk in LLM-based agents and applications.
StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts...