arinze1/ChatGPT-Jailbreaks-GIT

ChatGPT and Google AI Studio

Score: 43 / 100 (Emerging)

This project offers examples of prompts designed to bypass safety features and content restrictions in large language models like ChatGPT and Google AI Studio. It provides specific text inputs that can be used to elicit responses that the models were trained to avoid. The primary users are individuals exploring the boundaries and limitations of AI models, often for research, ethical hacking, or content generation outside typical guidelines.

Use this if you are intentionally trying to test the ethical boundaries or safety filters of AI chatbots to understand their vulnerabilities or generate unrestricted content.

Not ideal if you intend to use AI responsibly for standard tasks, generate safe and compliant content, or avoid potentially harmful outputs.

Tags: AI-safety-testing, prompt-engineering, content-moderation-bypasses, generative-AI-exploration
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 18 / 25


Stars: 26
Forks: 16
Language: Rich Text Format
License: none
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arinze1/ChatGPT-Jailbreaks-GIT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
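The same endpoint can also be called from a script. Below is a minimal Python sketch using only the standard library; the `quality_url` and `fetch_quality` helper names are illustrative, and the shape of the JSON response is not documented here, so the example simply prints whatever the endpoint returns.

```python
import json
import urllib.request

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(repo: str) -> str:
    """Build the quality-score URL for a GitHub repo slug like 'owner/name'."""
    return f"{BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch the quality data as parsed JSON.

    No API key is required for up to 100 requests/day, per the note above.
    The response schema is an assumption; inspect the output to see the fields.
    """
    with urllib.request.urlopen(quality_url(repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("arinze1/ChatGPT-Jailbreaks-GIT")
    print(json.dumps(data, indent=2))
```

Requesting a free key (1,000 requests/day) follows the same pattern; how the key is passed (header vs. query parameter) is not specified on this page, so check the API documentation before adding it.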