successfulstudy/jailbreakprompt

Compile a list of AI jailbreak scenarios for enthusiasts to explore and test.

Score: 26 / 100 (Experimental)

This project helps AI researchers and ethicists study the boundaries and safety mechanisms of large language models. It collects 'jailbreak' prompts that can be used as inputs to various AI models; observing how models respond to these attempts to circumvent their ethical safeguards yields insights that help improve AI robustness. It is aimed at academics and researchers in AI ethics and safety.

No commits in the last 6 months.

Use this if you are conducting academic research into AI safety, ethics, and the limitations of large language models.

Not ideal if you intend to apply these methods in real-world, non-academic scenarios or for malicious purposes.

Tags: AI ethics research · AI safety · prompt engineering · language model analysis · academic research
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 44
Forks: 4
Language: (not listed)
License: none
Last pushed: Dec 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/successfulstudy/jailbreakprompt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
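
If you would rather consume the endpoint from a script than from the shell, the sketch below fetches the same JSON using only Python's standard library. The response schema is not documented on this page, so any field name used here (such as "score") is an assumption; list the keys first and adapt to what the API actually returns.

import json
import urllib.request

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/successfulstudy/jailbreakprompt"
)

# Fetch and decode the JSON payload (no API key needed under the free tier).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The schema is undocumented here: print the top-level keys first,
# then read specific fields once you know they exist.
print(sorted(data))
print(data.get("score"))  # "score" is an assumed field name, not a documented one

Error handling is omitted for brevity; in practice, catch urllib.error.HTTPError so your script degrades gracefully if the daily request limit is exceeded.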