tuxsharxsec/Jailbreaks

A repo for all the jailbreaks

Score: 26 / 100 (Experimental)

This project provides a collection of structured prompts designed to test the security of large language models (LLMs) such as DeepSeek, Gemini, GPT-5, and Grok. It collects examples of how LLMs can be manipulated into bypassing their safety features. AI security researchers and red teamers can use these prompts to identify vulnerabilities and understand how to build more robust AI systems.

No commits in the last 6 months.

Use this if you are an AI security professional or red teamer looking to understand, test, and document methods for bypassing LLM safety guardrails.

Not ideal if you are looking for defensive tooling or automated protection against prompt injection; this project focuses on demonstrating vulnerabilities, not preventing them.

Tags: AI red teaming, LLM security, adversarial AI, vulnerability research, prompt engineering

No License · Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 7 / 25
Community: 10 / 25

How are scores calculated?
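Judging from the figures shown here, the overall score appears to be the simple sum of the four 25-point subscores:

2 (Maintenance) + 7 (Adoption) + 7 (Maturity) + 10 (Community) = 26 / 100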

Stars: 39
Forks: 4
Language: Roff
License: None
Last pushed: Sep 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/tuxsharxsec/Jailbreaks"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
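For programmatic use, here is a minimal Python sketch, assuming the endpoint returns JSON. The payload's field names are not documented on this page, so the example just dumps the raw response rather than guessing at its structure:

import requests

# Quality-score endpoint shown above.
url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/tuxsharxsec/Jailbreaks"

# No key needed within the 100 requests/day limit.
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Print the raw JSON payload to discover its actual structure;
# the field names are not documented here, so inspect before relying on them.
print(resp.json())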