4ndr0666/gpt
A.I. Sorcery
This research documentation and framework helps security professionals and national-defense teams test the robustness of Large Language Models (LLMs). By feeding a model technical, code-like instructions, it uncovers hidden vulnerabilities where the LLM prioritizes code over its safety instructions. The output identifies "Silent Logic Overrides" and provides insight for building stronger AI guardrails.
Use this if you need to perform advanced red-team analysis on LLMs to find and exploit structural vulnerabilities that bypass standard safety measures.
Not ideal if you are looking for a general-purpose tool for improving LLM output quality or adding basic safety filters.
Stars: 10
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Feb 24, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/4ndr0666/gpt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security,...
citiususc/smarty-gpt
A wrapper of LLMs that biases its behaviour using prompts and contexts in a transparent manner...
B3o/GPTS-Prompt-Collection
Collect the prompts of GPTs
timqian/openprompt.co
Create. Use. Share. ChatGPT prompts