StavC/PromptWares

A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are Vulnerable to PromptWares

26
/ 100
Experimental

This project helps application developers understand how malicious users can exploit generative AI (GenAI) models within their applications. It demonstrates how specially crafted user inputs, called "PromptWares," can trick a GenAI model into performing harmful actions like launching denial-of-service attacks or manipulating data. The output provides code examples and research findings, showing developers what to look for and how to defend their GenAI-powered applications against these vulnerabilities.

No commits in the last 6 months.

Use this if you are developing or managing GenAI-powered applications and need to understand the security risks of jailbreaking and prompt injection attacks.

Not ideal if you are an end-user interacting with GenAI applications and are not involved in their development or security.

AI-security application-security Generative-AI prompt-engineering data-protection
No License Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 11 / 25

How are scores calculated?

Stars

12

Forks

2

Language

Jupyter Notebook

License

Last pushed

Aug 17, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/StavC/PromptWares"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.