CyberAlbSecOP/Awesome_GPT_Super_Prompting

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hacking, AI Prompt Engineering, Adversarial Machine Learning.

Score: 57/100 (Established)

This resource helps cybersecurity researchers, penetration testers, and AI safety experts understand and mitigate risks in large language models. It curates techniques for 'jailbreaking' LLMs, exposing system prompts, and demonstrating prompt injection attacks. In short, it helps you find vulnerabilities in AI systems by showing which inputs can bypass intended safety features or reveal confidential instructions.

Use this if you are an AI security specialist, red team member, or researcher needing to explore adversarial attacks and vulnerabilities in large language models like GPT, or if you want to understand how to bypass LLM restrictions.

Not ideal if you are looking for tools to improve the performance of your everyday prompts or for general LLM development best practices.

AI Security, Penetration Testing, Red Teaming, Prompt Engineering, Vulnerability Research
No package, no dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 21/25

How are scores calculated? The four 25-point components above sum to the overall score here: 10 + 10 + 16 + 21 = 57.

Stars: 3,730
Forks: 466
Language: HTML
License: GPL-3.0
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/CyberAlbSecOP/Awesome_GPT_Super_Prompting"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
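
A minimal sketch of consuming this endpoint from Python, assuming the API returns JSON. The field names queried at the end (score, stars) are assumptions, not confirmed by anything on this page, so the sketch prints the full payload first so you can see the real keys.

import json
import urllib.request

# Same endpoint as the curl example above; keyless access is limited to
# 100 requests/day.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/CyberAlbSecOP/Awesome_GPT_Super_Prompting"
)

# Fetch and parse the response (assumed to be JSON).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Dump the whole payload to discover the actual schema.
print(json.dumps(data, indent=2))

# "score" and "stars" are guessed field names; .get() returns None if the
# real keys differ.
print(data.get("score"), data.get("stars"))

If a key is used for the higher 1,000/day limit, the page does not document how it is passed, so the sketch sticks to the keyless request shown in the curl example.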