BirdsAreFlyingCameras/GPT-5_Jailbreak_PoC

A working PoC of a GPT-5 jailbreak via PROMISQROUTE (Prompt-based Router Open-Mode Manipulation), with a barebones C2 server and agent-generation demo.

Score: 35/100 (Emerging)

This project offers a method to bypass the safety restrictions of advanced AI models like GPT-5. By using a specialized prompt, you can get the AI to generate content that would normally be filtered, such as code for malicious activities. This is for AI red teamers, security researchers, or anyone needing to test the boundaries and vulnerabilities of large language models.

No commits in the last 6 months.

Use this if you need to perform advanced red teaming or security research on AI models by prompting them to generate content or code that violates typical safety guidelines.

Not ideal if you are looking for a tool for ethical AI usage, content creation, or general software development that adheres to safety and responsible AI principles.

Tags: AI red teaming · AI security · vulnerability research · prompt engineering · large language model testing
No License · Stale 6m · No Package · No Dependents
Maintenance: 2/25
Adoption: 8/25
Maturity: 7/25
Community: 18/25


Stars: 55
Forks: 12
Language: C
License: None
Last pushed: Sep 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/BirdsAreFlyingCameras/GPT-5_Jailbreak_PoC"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
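The curl call above can also be scripted. Below is a minimal Python sketch that builds the quality-report URL and (optionally) fetches it with the standard library; the `quality_url` helper and the `"score"` response field are assumptions for illustration, not part of the documented API, so inspect the actual JSON before relying on field names.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-report URL for a repo slug (hypothetical helper)."""
    return f"{BASE}/{category}/{repo}"

url = quality_url("llm-tools", "BirdsAreFlyingCameras/GPT-5_Jailbreak_PoC")
print(url)

# Uncomment to fetch live data (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data.get("score"))  # "score" field name is an assumption
```

Since unauthenticated access is capped at 100 requests/day, cache responses locally if you poll many repositories.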