chatgpt_system_prompt and Awesome_GPT_Super_Prompting
These are ecosystem siblings—both are curated repositories documenting system prompts, injection techniques, and security vulnerabilities in GPT systems, serving as complementary knowledge bases for understanding and testing LLM prompt security rather than competing implementations.
About chatgpt_system_prompt
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
This collection helps you craft effective instructions for AI assistants like ChatGPT and custom GPTs. You'll find examples of well-designed system prompts and insights into how they work. This is for anyone who uses AI tools and wants to get better, more reliable results from them, or who is building their own specialized AI agents.
About Awesome_GPT_Super_Prompting
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
This resource helps cybersecurity researchers, penetration testers, and AI safety experts understand and mitigate risks associated with large language models. It provides a curated collection of techniques for 'jailbreaking' LLMs, exposing system prompts, and demonstrating prompt injection attacks. Essentially, it helps you find vulnerabilities in AI systems by showing you what inputs can bypass their intended safety features or reveal confidential instructions.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work