arinze1/ChatGPT-Jailbreaks-GIT
ChatGPT and Google AI Studio
This project offers examples of prompts designed to bypass safety features and content restrictions in large language models like ChatGPT and Google AI Studio. It provides specific text inputs that can be used to elicit responses that the models were trained to avoid. The primary users are individuals exploring the boundaries and limitations of AI models, often for research, ethical hacking, or content generation outside typical guidelines.
Use this if you are intentionally trying to test the ethical boundaries or safety filters of AI chatbots to understand their vulnerabilities or generate unrestricted content.
Not ideal if you intend to use AI responsibly for standard tasks, to generate safe and compliant content, or to avoid potentially harmful outputs.
Stars
26
Forks
16
Language
Rich Text Format
License
—
Category
Last pushed
Mar 09, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arinze1/ChatGPT-Jailbreaks-GIT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
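The same data can be fetched programmatically. A minimal Python sketch of the curl call above, using only the standard library; the response schema and the API-key header name are assumptions, since neither is documented on this page:

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL shown in the curl example above.
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    # Anonymous access allows 100 requests/day; a free key raises that to 1,000/day.
    # NOTE: the "X-API-Key" header name is an assumption -- check the API docs.
    req = urllib.request.Request(build_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        # Assumes the endpoint returns a JSON object; adjust if it does not.
        return json.load(resp)


if __name__ == "__main__":
    print(build_url("arinze1", "ChatGPT-Jailbreaks-GIT"))
```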
Higher-rated alternatives
0xk1h0/ChatGPT_DAN
ChatGPT DAN, Jailbreaks prompt
Batlez/ChatGPT-Jailbreak-Pro
The ultimate ChatGPT Jailbreak Tool with stunning themes, categorized prompts, and a...
verazuo/jailbreak_llms
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and...
Techiral/GPT-Jailbreak
This repository contains the jailbreaking process for GPT-3, GPT-4, GPT-3.5, ChatGPT, and...
Cyberlion-Technologies/ChatGPT_DAN
ChatGPT DAN, Jailbreaks prompt