langgptai/LLM-Jailbreaks
LLM Jailbreaks, ChatGPT, Claude, Llama, DAN Prompts, Prompt Leaking
This project collects pre-written prompts, commonly called "jailbreaks," designed to make popular AI language models such as ChatGPT, Claude, Llama, and Gemini produce responses they would normally refuse. You paste one of these prompts into an AI chatbot, and the model may return unrestricted content, sometimes explicit or controversial, bypassing its built-in safety filters. It is aimed at people who want to probe the boundaries of AI capabilities or generate content without the usual ethical or content-related guardrails.
561 stars. No commits in the last 6 months.
Use this if you need to prompt an AI chatbot to generate content that its standard programming or safety policies would normally prevent, such as creative writing with explicit themes or technical content that might be deemed controversial.
Not ideal if you are looking for tools to enhance AI safety or ethical AI usage, or if you need AI-generated content that strictly adheres to platform guidelines and responsible AI principles.
Stars: 561
Forks: 50
Language: —
License: Apache-2.0
Category: prompt-engineering
Last pushed: Apr 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/langgptai/LLM-Jailbreaks"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
rpidanny/llm-prompt-templates
Empower your LLM to do more than you ever thought possible with these state-of-the-art prompt templates.
jalvarezz13/prompt.fail
prompt.fail explores prompt injection techniques in large language models (LLMs), providing...
Frosy01/Krita-Ollama-Prompt-Generator
🖌️ Generate and refine prompts directly in Krita with the local LLM-powered plugin, enabling...
kyahikaru/hinglish-prompt-injection-detector
A detection system for identifying prompt injection attempts in Hinglish (Hindi-English...
kilkelly/multiprompt
Send a prompt to multiple LLMs / text models / image models simultaneously