CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hacking, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.
This resource helps cybersecurity researchers, penetration testers, and AI safety experts understand and mitigate risks associated with large language models. It provides a curated collection of techniques for 'jailbreaking' LLMs, leaking system prompts, and demonstrating prompt injection attacks, showing which inputs can bypass a model's intended safety features or reveal its confidential instructions.
Use this if you are an AI security specialist, red team member, or researcher needing to explore adversarial attacks and vulnerabilities in large language models like GPT, or if you want to understand how to bypass LLM restrictions.
Not ideal if you are looking for tools to improve the performance of your everyday prompts or for general LLM development best practices.
Stars: 3,730
Forks: 466
Language: HTML
License: GPL-3.0
Category: prompt-engineering
Last pushed: Mar 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/CyberAlbSecOP/Awesome_GPT_Super_Prompting"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
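A minimal sketch of consuming the endpoint from Python, assuming the response is JSON whose field names mirror the stats shown above; the actual schema is not documented here, so treat the keys as placeholders:

import requests

# Public endpoint shown above; no key needed for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/CyberAlbSecOP/Awesome_GPT_Super_Prompting")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Field names are assumptions based on the stats listed on this page,
# not a documented schema; adjust them to match the real response.
for key in ("stars", "forks", "language", "license", "last_pushed", "commits_30d"):
    print(f"{key}: {data.get(key, 'n/a')}")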
Related tools
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
citiususc/smarty-gpt
A wrapper of LLMs that biases its behaviour using prompts and contexts in a transparent manner...
B3o/GPTS-Prompt-Collection
Collect the prompts of GPTs.
timqian/openprompt.co
Create. Use. Share. ChatGPT prompts
gaur-avvv/XGPT-WormGPT
(Added Dark-GODMode) The Real BlackHat GPT - AI can do your illegal stuff without saying...