tg12/gpt_jailbreak_status

This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.

Quality score: 34 / 100 (Emerging)

This project tracks the current ability to bypass the safety features of OpenAI's GPT models. It catalogs various methods that attempt to 'jailbreak' the AI and reports whether each method still works or has been patched. It's for anyone interested in the boundaries and limitations of large language models, particularly AI researchers, ethicists, and prompt engineers exploring AI behavior.

939 stars. No commits in the last 6 months.

Use this if you need to know if current techniques can circumvent the guardrails and safety protocols of OpenAI's GPT models.

Not ideal if you are looking for a tool to develop or test jailbreak techniques yourself, rather than just track their status.

AI-safety prompt-engineering large-language-models AI-ethics AI-research
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 16 / 25

How are scores calculated? The four category scores above (each out of 25) appear to sum to the overall score: 0 + 10 + 8 + 16 = 34 / 100.
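A minimal sketch of that arithmetic, assuming the overall score is simply the total of the four category scores shown on this page (the actual scoring formula is not documented here):

    # Category scores as displayed above, each out of 25.
    # Assumption: the overall score is just their sum; the real
    # weighting used by the site is not documented on this page.
    categories = {"Maintenance": 0, "Adoption": 10, "Maturity": 8, "Community": 16}

    overall = sum(categories.values())
    print(f"{overall} / 100")  # -> 34 / 100, matching the displayed score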

Stars: 939
Forks: 65
Language: HTML
License: None
Last pushed: Feb 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/tg12/gpt_jailbreak_status"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
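For convenience, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON; the exact field names are not documented on this page, so the example just pretty-prints whatever comes back.

    # Fetch the quality data for this repository from the API shown above.
    # Assumption: the endpoint returns JSON; its field names are not
    # documented here, so we simply pretty-print the response.
    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/tg12/gpt_jailbreak_status"

    # No API key: limited to 100 requests/day. A free key raises this to 1,000/day.
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))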