jailbreakme-xyz/jailbreak
jailbreakme.xyz is an open-source decentralized app (dApp) that challenges users to jailbreak pre-existing LLMs, uncovering weaknesses in exchange for rewards. 🏆
The platform helps organizations test their AI models for vulnerabilities before release. Organizations provide a model, and users interact with it, attempting to find weaknesses such as prompt injection. Successful jailbreaks earn users rewards, while organizations gain security insights into their AI. It is aimed at AI developers, security teams, and product managers responsible for deploying safe, robust AI systems.
Use this if you are developing or deploying AI models and need to proactively identify and fix prompt injection vulnerabilities and other weaknesses in a distributed testing environment.
Not ideal if you are looking for a general-purpose AI development framework or a platform for typical user acceptance testing.
Stars
45
Forks
19
Language
JavaScript
License
—
Category
Prompt Engineering
Last pushed
Jan 06, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jailbreakme-xyz/jailbreak"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
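If you would rather script this than shell out to curl, here is a minimal Python sketch of the same request. It assumes the requests library is installed; the X-API-Key header name is an assumption, since this page documents only the URL and the rate limits.

# Minimal sketch: fetch this repo's quality record from the API.
# Assumptions: the `requests` library is available, and keyed access
# uses an "X-API-Key" header (the page documents only URL and limits).
import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/jailbreakme-xyz/jailbreak")

def fetch_quality(api_key=None):
    """Return the quality record as a dict; pass a key for the 1,000/day tier."""
    headers = {"X-API-Key": api_key} if api_key else {}  # header name assumed
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surfaces rate-limit and other HTTP errors
    return resp.json()

if __name__ == "__main__":
    print(fetch_quality())  # anonymous tier: 100 requests/day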
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...