trinib/ZORG-Jailbreak-Prompt-Text
Bypass restricted and censored content on AI chat prompts 😈
This project provides pre-written prompt text designed to bypass the built-in ethical and safety filters of various AI chatbots, including Google Gemini, Deepseek, and Mistral. By pasting this 'jailbreak' prompt, users can instruct the AI to generate unfiltered, uncensored, and potentially controversial content on any topic. This tool is for individuals looking to explore the boundaries of AI capabilities without typical restrictions.
245 stars. No commits in the last 6 months.
Use this if you want to bypass AI chatbot safety filters to generate unrestricted content for educational or exploratory purposes.
Not ideal if you are looking for a tool that adheres to ethical AI guidelines or if you are using ChatGPT or Claude, as it currently does not work with them.
Stars: 245
Forks: 39
Language: —
License: —
Category: —
Last pushed: Sep 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/trinib/ZORG-Jailbreak-Prompt-Text"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
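Only the endpoint URL above is documented here; its path appears to follow a `category/owner/repo` pattern. A minimal sketch, assuming that pattern holds (the response schema is not documented, so the fetch is left as a comment):

```python
# Build the quality-API URL from the documented endpoint pattern.
# The category/owner/repo path layout is an assumption inferred from
# the single example URL shown above.
def quality_url(category: str, owner: str, repo: str) -> str:
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"

url = quality_url("prompt-engineering", "trinib", "ZORG-Jailbreak-Prompt-Text")
print(url)

# To actually fetch the data (standard library only, schema unknown):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
```

This reproduces the exact URL from the `curl` example, so the same request can be issued from Python without extra dependencies.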
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...