tuxsharxsec/Jailbreaks
A repo for all the jailbreaks
This project provides a collection of structured prompts designed to test the security of large language models (LLMs) like Deepseek, Gemini, GPT-5, and Grok. It offers examples of how LLMs can be manipulated to bypass their safety features. AI security researchers and red teamers can use these prompts to identify vulnerabilities and understand how to build more robust AI systems.
No commits in the last 6 months.
Use this if you are an AI security professional or red teamer looking to understand, test, and document methods for bypassing LLM safety guardrails.
Not ideal if you are looking for defensive tools or automated solutions for preventing prompt injections, as this focuses on demonstrating vulnerabilities.
Stars: 39
Forks: 4
Language: Roff
License: —
Category:
Last pushed: Sep 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/tuxsharxsec/Jailbreaks"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
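If you prefer to fetch the same data programmatically rather than with curl, a minimal Python sketch is below. It performs a plain GET against the endpoint shown above with no API key (the free tier needs none); the assumption that the response is JSON, and whatever fields it contains, are not documented here, so the script simply pretty-prints the raw payload.

    # Minimal sketch: fetch this repo's quality data from the pt-edge API shown above.
    # Assumes the endpoint returns JSON; the exact response schema is not documented here.
    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/tuxsharxsec/Jailbreaks"

    def fetch_quality_data(url: str) -> dict:
        # Plain GET, no API key: the free tier (100 requests/day) does not require one.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        data = fetch_quality_data(URL)
        # Pretty-print whatever fields the API returns.
        print(json.dumps(data, indent=2))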
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...