obscuralabs-AI/Symbolic-Prompt-PenTest
Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs.
This project helps AI security professionals and researchers evaluate the robustness of large language models (LLMs) against sophisticated, non-obvious attacks. It provides a method for crafting 'symbolic prompts'—metaphorical narratives that can bypass standard defenses. You input these stealthy prompts into an LLM and analyze its responses to understand vulnerabilities, using the provided scoring system to measure the model's resistance.
No commits in the last 6 months.
Use this if you are responsible for testing the security and ethical alignment of LLMs and need advanced techniques to uncover hidden vulnerabilities that keyword-based red teaming misses.
Not ideal if you are looking for traditional software penetration testing tools or simple keyword-based prompt injection techniques.
Stars
8
Forks
1
Language
—
License
—
Category
Last pushed
May 16, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/obscuralabs-AI/Symbolic-Prompt-PenTest"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...