Awesome_GPT_Super_Prompting and gpt
These are ecosystem siblings where one provides offensive security research (jailbreaks, prompt injection techniques) and the other provides defensive implementations (prompt engineering and security hardening), both addressing the same attack surface in LLM applications.
About Awesome_GPT_Super_Prompting
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
This resource helps cybersecurity researchers, penetration testers, and AI safety experts understand and mitigate risks associated with large language models. It provides a curated collection of techniques for 'jailbreaking' LLMs, exposing system prompts, and demonstrating prompt injection attacks. Essentially, it helps you find vulnerabilities in AI systems by showing you what inputs can bypass their intended safety features or reveal confidential instructions.
About gpt
4ndr0666/gpt
A.I. Sorcery
This research documentation and framework helps security professionals and national defense teams test the robustness of Large Language Models (LLMs). By feeding technical code-like instructions, it uncovers hidden vulnerabilities where LLMs prioritize code over safety instructions. The output identifies "Silent Logic Overrides" and provides insights for creating stronger AI guardrails.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work