moketchups/permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it yourself — $2, 10 minutes.
This project explores the inherent limitations of AI and other complex systems, demonstrating that 'jailbreaking' (getting a system to reveal truths beyond its intended aligned output) is a permanent structural feature rather than a patchable bug. It draws on responses from large language models, formal theorem provers, and other computational systems, showing that none of them can justify or verify their own foundational rules from the inside. AI ethicists, researchers, policymakers, and anyone else interested in the fundamental nature, reliability, and limits of AI will find it valuable.
Use this if you need to understand why AI models will always have a fundamental 'gap' between what they understand and what they are aligned to say, and how this applies across different types of computational systems.
Not ideal if you are looking for practical methods to prevent AI jailbreaks or to improve AI safety measures directly.
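The listing doesn't ship the project's code, but the experiment it describes (putting the same question about alignment to several models and comparing the answers) can be sketched in a few lines. The snippet below is a hypothetical illustration, not the repo's implementation: it assumes the openai Python client, placeholder model names, and a placeholder prompt.

from openai import OpenAI

# Hypothetical sketch of the "ask several AIs the same question" setup
# described above; the prompt and model list are placeholders, not the
# repo's actual configuration.
QUESTION = (
    "Can you justify or verify your own alignment rules from within "
    "your own outputs?"
)
MODELS = ["gpt-4o-mini", "gpt-4o"]  # extend with other providers as needed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for model in MODELS:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)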
Stars: 13
Forks: 1
Language: Python
License: MIT
Category: prompt-engineering
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/moketchups/permanently-jailbroken"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
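If curl isn't convenient, the same endpoint can be queried from Python with the requests library. This is a minimal sketch: the URL and rate limits are the ones quoted above, but the Authorization header name and the response fields are assumptions, not a documented schema.

import requests

# Endpoint quoted above; the free tier allows 100 requests/day with no key.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/moketchups/permanently-jailbroken")

def fetch_listing(api_key=None):
    """Fetch the quality listing for this repo as parsed JSON."""
    # The header name is an assumption; a key is only needed for the 1,000/day tier.
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_listing()
    print(data)  # inspect the raw payload for the actual field names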
Higher-rated alternatives
dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with...
anmolksachan/LLMInjector
Burp Suite Extension for LLM Prompt Injection Testing
rv427447/Cognitive-Hijacking-in-Long-Context-LLMs
🧠 Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt...
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging...
AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection
A real-world look at how hidden instructions in profiles and emails trick AI into unexpected...