Glor1us/llm-jailbreak-vulnerability-analysis
Experimental study of jailbreak and prompt injection vulnerabilities in large language models (LLMs) and evaluation of mitigation strategies.
Stars: —
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Glor1us/llm-jailbreak-vulnerability-analysis"
Open to everyone: 100 requests/day with no API key needed; get a free key for 1,000 requests/day.
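For programmatic access, here is a minimal Python sketch of the same request. It assumes only that the endpoint returns JSON; the response field names are not documented here, so the example simply prints whatever keys come back.

import requests

# Quality-metadata endpoint taken from the curl example above.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/transformers/"
    "Glor1us/llm-jailbreak-vulnerability-analysis"
)

# No API key is required for up to 100 requests/day.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Print every field the API returns, since the schema is not specified here.
for key, value in data.items():
    print(f"{key}: {value}")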
Higher-rated alternatives
UCSB-NLP-Chang/SemanticSmooth
Implementation of paper 'Defending Large Language Models against Jailbreak Attacks via Semantic...
sigeisler/reinforce-attacks-llms
REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and...
DAMO-NLP-SG/multilingual-safety-for-LLMs
[ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models"
yueliu1999/FlipAttack
[ICML 2025] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping".
vicgalle/merging-self-critique-jailbreaks
"Merging Improves Self-Critique Against Jailbreak Attacks", code and models