BirdsAreFlyingCameras/GPT-5_Jailbreak_PoC
A working PoC of a GPT-5 jailbreak via PROMISQROUTE (Prompt-based Router Open-Mode Manipulation), with a barebones C2 server and agent-generation demo.
This project demonstrates a method for bypassing the safety restrictions of advanced AI models such as GPT-5. A specially crafted prompt manipulates the model router so the request is handled with weaker safeguards, causing the model to generate content that would normally be filtered, such as code for malicious activities. It is aimed at AI red teamers, security researchers, and anyone who needs to probe the boundaries and vulnerabilities of large language models.
No commits in the last 6 months.
Use this if you need to perform advanced red teaming or security research on AI models by prompting them to generate content or code that violates typical safety guidelines.
Not ideal if you are looking for a tool for content creation or general software development, or for any workflow that must adhere to safety and responsible-AI principles.
Stars: 55
Forks: 12
Language: C
License: —
Category: —
Last pushed: Sep 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/BirdsAreFlyingCameras/GPT-5_Jailbreak_PoC"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
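For scripted access above the free tier, a keyed request might look like the sketch below. The X-API-Key header name and the PT_EDGE_KEY variable are assumptions for illustration only, since this page does not document the service's auth scheme; jq is used just to pretty-print the JSON response.

# Hypothetical keyed request; substitute whatever auth header the service actually documents.
curl -H "X-API-Key: $PT_EDGE_KEY" \
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/BirdsAreFlyingCameras/GPT-5_Jailbreak_PoC" | jq .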
Higher-rated alternatives
0xk1h0/ChatGPT_DAN
ChatGPT DAN and related jailbreak prompts
Batlez/ChatGPT-Jailbreak-Pro
The ultimate ChatGPT Jailbreak Tool with stunning themes, categorized prompts, and a...
verazuo/jailbreak_llms
[CCS'24] A dataset of 15,140 ChatGPT prompts from Reddit, Discord, websites, and...
Techiral/GPT-Jailbreak
This repository contains the jailbreaking process for GPT-3, GPT-4, GPT-3.5, ChatGPT, and...
arinze1/ChatGPT-Jailbreaks-GIT
ChatGPT and Google AI Studio