chatgpt_system_prompt and Awesome_GPT_Super_Prompting

These are ecosystem siblings—both are curated repositories documenting system prompts, injection techniques, and security vulnerabilities in GPT systems, serving as complementary knowledge bases for understanding and testing LLM prompt security rather than competing implementations.

chatgpt_system_prompt
65
Established
Maintenance 17/25
Adoption 10/25
Maturity 16/25
Community 22/25
Maintenance 10/25
Adoption 10/25
Maturity 16/25
Community 21/25
Stars: 10,443
Forks: 1,455
Downloads:
Commits (30d): 19
Language: HTML
License: MIT
Stars: 3,730
Forks: 466
Downloads:
Commits (30d): 0
Language: HTML
License: GPL-3.0
No Package No Dependents
No Package No Dependents

About chatgpt_system_prompt

LouisShark/chatgpt_system_prompt

A collection of GPT system prompts and various prompt injection/leaking knowledge.

This collection helps you craft effective instructions for AI assistants like ChatGPT and custom GPTs. You'll find examples of well-designed system prompts and insights into how they work. This is for anyone who uses AI tools and wants to get better, more reliable results from them, or who is building their own specialized AI agents.

AI instruction design prompt engineering custom GPT creation AI assistant optimization AI security awareness

About Awesome_GPT_Super_Prompting

CyberAlbSecOP/Awesome_GPT_Super_Prompting

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.

This resource helps cybersecurity researchers, penetration testers, and AI safety experts understand and mitigate risks associated with large language models. It provides a curated collection of techniques for 'jailbreaking' LLMs, exposing system prompts, and demonstrating prompt injection attacks. Essentially, it helps you find vulnerabilities in AI systems by showing you what inputs can bypass their intended safety features or reveal confidential instructions.

AI Security Penetration Testing Red Teaming Prompt Engineering Vulnerability Research

Scores updated daily from GitHub, PyPI, and npm data. How scores work