Awesome_GPT_Super_Prompting and gpt

These are ecosystem siblings where one provides offensive security research (jailbreaks, prompt injection techniques) and the other provides defensive implementations (prompt engineering and security hardening), both addressing the same attack surface in LLM applications.

gpt
36
Emerging
Maintenance 10/25
Adoption 10/25
Maturity 16/25
Community 21/25
Maintenance 10/25
Adoption 5/25
Maturity 8/25
Community 13/25
Stars: 3,730
Forks: 466
Downloads:
Commits (30d): 0
Language: HTML
License: GPL-3.0
Stars: 10
Forks: 2
Downloads:
Commits (30d): 0
Language: Python
License:
No Package No Dependents
No License No Package No Dependents

About Awesome_GPT_Super_Prompting

CyberAlbSecOP/Awesome_GPT_Super_Prompting

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.

This resource helps cybersecurity researchers, penetration testers, and AI safety experts understand and mitigate risks associated with large language models. It provides a curated collection of techniques for 'jailbreaking' LLMs, exposing system prompts, and demonstrating prompt injection attacks. Essentially, it helps you find vulnerabilities in AI systems by showing you what inputs can bypass their intended safety features or reveal confidential instructions.

AI Security Penetration Testing Red Teaming Prompt Engineering Vulnerability Research

About gpt

4ndr0666/gpt

A.I. Sorcery

This research documentation and framework helps security professionals and national defense teams test the robustness of Large Language Models (LLMs). By feeding technical code-like instructions, it uncovers hidden vulnerabilities where LLMs prioritize code over safety instructions. The output identifies "Silent Logic Overrides" and provides insights for creating stronger AI guardrails.

AI Security Red Teaming National Security Cybersecurity LLM Vulnerabilities

Scores updated daily from GitHub, PyPI, and npm data. How scores work