successfulstudy/jailbreakprompt
Compile a list of AI jailbreak scenarios for enthusiasts to explore and test.
This project helps AI researchers and ethicists understand the boundaries and safety mechanisms of large language models. It provides a collection of 'jailbreak' prompts that can be fed as inputs to various AI models; the resulting responses give insight into how models handle attempts to circumvent their ethical safeguards, which in turn helps improve AI robustness. It is aimed at academics and researchers in AI ethics and safety.
No commits in the last 6 months.
Use this if you are conducting academic research into AI safety, ethics, and the limitations of large language models.
Not ideal if you intend to apply these methods in real-world, non-academic scenarios or for malicious purposes.
Stars: 44
Forks: 4
Language: —
License: —
Category: —
Last pushed: Dec 13, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/successfulstudy/jailbreakprompt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
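If you use a free key, one plausible way to pass it is as a bearer token; the header name below is an assumption for illustration, since the listing does not document the authentication scheme.
# Hypothetical authenticated request; replace YOUR_API_KEY with your key (header name assumed, not documented here)
curl -H "Authorization: Bearer YOUR_API_KEY" "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/successfulstudy/jailbreakprompt"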
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...