# Jailbreak Attack Analysis: LLM Tools

Tools, datasets, and methods for generating, analyzing, and understanding jailbreak attacks against LLMs—including attack taxonomies, prompt injection techniques, and adversarial methods. Does NOT include defense mechanisms, safety alignment, or general robustness improvements.

There are 13 jailbreak-attack analysis tools tracked. One scores 70 or above (the Verified tier). The highest-rated is wuyoscar/ISC-Bench at 70/100 with 677 stars, and 2 of the top 10 are actively maintained.

Get all 13 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=jailbreak-attacks-analysis&limit=20"
```

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
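A minimal sketch of consuming the endpoint in Python. The response schema assumed here (a top-level `projects` list with `name`, `score`, and `tier` fields) is a guess for illustration; check the actual API output before relying on it. The sample payload is built from entries shown on this page.

```python
import json

# Hypothetical sample mirroring the assumed API response shape.
SAMPLE = json.dumps({
    "projects": [
        {"name": "wuyoscar/ISC-Bench", "score": 70, "tier": "Verified"},
        {"name": "yueliu1999/Awesome-Jailbreak-on-LLMs", "score": 61, "tier": "Established"},
        {"name": "wangywUST/OutputJailbreak", "score": 13, "tier": "Experimental"},
    ]
})

def top_projects(payload: str, min_score: int = 40):
    """Return (name, score) pairs at or above min_score, highest score first."""
    data = json.loads(payload)
    rows = [(p["name"], p["score"])
            for p in data["projects"] if p["score"] >= min_score]
    return sorted(rows, key=lambda r: r[1], reverse=True)

print(top_projects(SAMPLE))

# To pull live data instead (no key needed, 100 requests/day):
#   import urllib.request
#   url = ("https://pt-edge.onrender.com/api/v1/datasets/quality"
#          "?domain=llm-tools&subcategory=jailbreak-attacks-analysis&limit=20")
#   payload = urllib.request.urlopen(url).read().decode()
```

The filtering threshold of 40 is arbitrary; adjust it to match whichever tier cutoff you care about.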

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | wuyoscar/ISC-Bench | Internal Safety Collapse: Turning LLMs into a "Jailbroken State" Without "a... | 70 | Verified |
| 2 | yueliu1999/Awesome-Jailbreak-on-LLMs | Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel,... | 61 | Established |
| 3 | yiksiu-chan/SpeakEasy | [ICML 2025] Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple... | 46 | Emerging |
| 4 | xirui-li/DrAttack | Official implementation of paper: DrAttack: Prompt Decomposition and... | 42 | Emerging |
| 5 | tmlr-group/DeepInception | [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" | 41 | Emerging |
| 6 | Techiral/awesome-llm-jailbreaks | Latest AI Jailbreak Payloads & Exploit Techniques for GPT, QWEN, and all LLM Models | 40 | Emerging |
| 7 | CryptoAILab/FigStep | [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic... | 38 | Emerging |
| 8 | NeuralTrust/echo-chamber | Code and examples for Echo Chamber LLM Jailbreak. | 38 | Emerging |
| 9 | AetherPrior/TrickLLM | This repository contains the code for the paper "Tricking LLMs into... | 33 | Emerging |
| 10 | erfanshayegani/Jailbreak-In-Pieces | [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak... | 33 | Emerging |
| 11 | RobustNLP/DeRTa | A novel approach to improve the safety of large language models, enabling... | 31 | Emerging |
| 12 | michael-borck/taxonomy-of-ai-jailbreaks | Categorizes AI jailbreak tactics using taxonomic analysis to enhance LLM... | 28 | Experimental |
| 13 | wangywUST/OutputJailbreak | Repository for our paper "Frustratingly Easy Jailbreak of Large Language... | 13 | Experimental |