4ndr0666/gpt

A.I. Sorcery

Score: 36/100 (Emerging)

This research documentation and framework helps security professionals and national defense teams test the robustness of Large Language Models (LLMs). By feeding a model technical, code-like instructions, it uncovers hidden vulnerabilities where LLMs prioritize executing embedded instructions over following their safety guidelines. The output identifies "Silent Logic Overrides" and provides insights for building stronger AI guardrails.

Use this if you need to perform advanced red team analysis on LLMs to find and exploit structural vulnerabilities that bypass standard safety measures.

Not ideal if you are looking for a general-purpose tool to improve LLM output quality or to add basic safety filters.

AI Security · Red Teaming · National Security · Cybersecurity · LLM Vulnerabilities
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 13 / 25


Stars: 10
Forks: 2
Language: Python
License: None
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/4ndr0666/gpt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
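The same endpoint can be called from a script. A minimal sketch, assuming the API returns JSON (the response schema is not documented here, so only the URL construction from the curl example above is taken as given; the helper names are hypothetical):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for an owner/repo pair."""
    return f"{BASE}/prompt-engineering/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data for a repo; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No key needed for up to 100 requests/day.
    print(fetch_quality("4ndr0666", "gpt"))
```

Note the free-tier limit of 100 requests/day; how a key is attached for the 1,000/day tier is not shown on this page.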