Th3-C0der/Th3-GPT

Th3-GPT Prompt/Script Will Jailbreak ChatGPT / Other AI Models

Score: 23 / 100 (Experimental)

This repository provides a pre-written prompt designed to make older AI models, such as GPT-3.5 or Gemini 1.5 Flash, bypass their safety guidelines and answer any request without ethical or legal restrictions. Pasted into one of these older models, the prompt causes the AI to output detailed, unrestricted answers to questions it would normally refuse. It is aimed at users who want to probe the boundaries of AI capabilities or generate content without the usual safeguards.

No commits in the last 6 months.

Use this if you want to 'jailbreak' an older AI model to get unrestricted (and potentially harmful or unethical) responses to any query.

Not ideal if you are using newer, more hardened AI models, against which this prompt will likely fail, or if you are looking for ethical, safe AI interactions.

Tags: AI-experimentation, prompt-engineering, unrestricted-content-generation, AI-safety-testing, AI-model-manipulation
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 35
Forks: 3
Language: (none listed)
License: none
Last pushed: Aug 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Th3-C0der/Th3-GPT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
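The same endpoint can be called from Python. A minimal sketch, assuming the endpoint returns JSON as implied by the curl example; the `X-Api-Key` header name used for the optional key is an assumption, not documented here:

```python
import json
from typing import Optional
from urllib.parse import quote
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch the quality report as a dict.

    The 'X-Api-Key' header name is a guess for how the free key might be
    passed -- check the API's own documentation for the real mechanism.
    """
    headers = {"Accept": "application/json"}
    if api_key:
        headers["X-Api-Key"] = api_key  # hypothetical header name
    req = Request(quality_url(owner, repo), headers=headers)
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Reconstructs the exact URL from the curl example above.
    print(quality_url("Th3-C0der", "Th3-GPT"))
```

Within the 100-requests/day anonymous limit, `fetch_quality("Th3-C0der", "Th3-GPT")` needs no key; pass `api_key=...` once you have a free key for the higher quota.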